Press Release

Jiaying Huang Explores Forecasting Strategies to Optimize Resource Scheduling in Cloud Platforms

A forecasting-driven framework integrates ARIMA, LSTM, and ensemble learning to optimize cloud resource scheduling. By predicting CPU, memory, and network demands in real time, it enhances utilization, reduces SLA violations, and provides a scalable, data-driven solution for intelligent cloud infrastructure management.

As cloud computing technologies advance and service demands become increasingly diverse, cloud platforms face growing challenges in achieving accurate and efficient resource scheduling. To address these challenges, a recent study proposes a forecasting-driven framework designed to improve the precision of resource allocation while reducing the likelihood of service interruptions. Published in the Journal of Computer, Signal, and System Research, the research introduces a layered architecture that integrates statistical and machine learning models for time series analysis, offering a scalable solution for cloud infrastructure planning.

The core of the framework combines ARIMA and LSTM to extract both linear trends and nonlinear fluctuations in resource usage data. Trained on high-frequency metrics from Alibaba Cloud Tianchi, the system achieved a minimum RMSE of 7.32 and consistent improvements in MAE and MAPE. The hybrid model processes CPU, memory, and network load sequences to anticipate near-future demand and inform scheduling algorithms in advance, and it can be refined continuously as new data arrives, supporting adaptive decision-making.
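The release does not include the study's implementation; the sketch below shows one common way such an ARIMA and LSTM hybrid can be assembled, with ARIMA capturing the linear trend and an LSTM modeling the residual nonlinear fluctuations. It assumes `cpu_usage` is a one-dimensional NumPy array of utilization samples; the ARIMA order, network size, and lookback window are illustrative choices rather than values reported in the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, lookback):
    """Slice a 1-D series into (samples, lookback, 1) windows and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    X = np.asarray(X, dtype=np.float32).reshape(-1, lookback, 1)
    return X, np.asarray(y, dtype=np.float32)

def hybrid_forecast(cpu_usage, lookback=24, horizon=12):
    """Illustrative ARIMA-plus-LSTM hybrid: ARIMA fits the linear component,
    an LSTM learns the nonlinear residuals, and the two forecasts are summed."""
    # 1. Linear component: fit ARIMA and project the trend over the horizon.
    arima_fit = ARIMA(cpu_usage, order=(2, 1, 2)).fit()
    linear_forecast = arima_fit.forecast(steps=horizon)

    # 2. Nonlinear component: train an LSTM on the in-sample residuals.
    residuals = cpu_usage - arima_fit.fittedvalues
    X, y = make_windows(residuals, lookback)
    lstm = Sequential([LSTM(32, input_shape=(lookback, 1)), Dense(1)])
    lstm.compile(optimizer="adam", loss="mse")
    lstm.fit(X, y, epochs=20, batch_size=32, verbose=0)

    # 3. Roll the LSTM forward, feeding its own predictions back in.
    window = residuals[-lookback:].astype(np.float32).tolist()
    residual_forecast = []
    for _ in range(horizon):
        nxt = float(lstm.predict(
            np.array(window[-lookback:], dtype=np.float32).reshape(1, lookback, 1),
            verbose=0)[0, 0])
        residual_forecast.append(nxt)
        window.append(nxt)

    # 4. Combined prediction is what a scheduler would consume ahead of demand.
    return linear_forecast + np.array(residual_forecast)
```

In this residual-based formulation, the scheduler receives the summed forecast a full horizon ahead of actual demand, which is the "anticipate and inform in advance" behavior the study describes.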

In addition to the dual-model structure, the study incorporates LightGBM-based ensemble learning, multivariate time series modeling, and hierarchical time windows that capture activity patterns across multiple time scales. Regularization techniques help reduce overfitting, while multi-channel LSTM structures simulate concurrent behavior across resource types. These methods are deployed in containerized environments and connected via API-based automation workflows, forming a complete pipeline from data collection to real-time scheduling optimization.
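As a rough illustration of the ensemble stage, the sketch below builds rolling-window features at several time scales from hypothetical cpu, memory, and network columns and fits a regularized LightGBM regressor to predict next-step demand. The column names, window lengths, and hyperparameters are assumptions made for this example, not settings reported in the study.

```python
import pandas as pd
import lightgbm as lgb

def build_features(metrics: pd.DataFrame, windows=(6, 24, 96)):
    """Hierarchical time-window features: rolling means and standard deviations
    of each resource metric at several scales, plus the most recent reading."""
    feats = {}
    for col in metrics.columns:            # e.g. 'cpu', 'memory', 'network' (assumed names)
        feats[f"{col}_lag1"] = metrics[col].shift(1)
        for w in windows:                  # short-, medium-, and long-range context
            feats[f"{col}_mean_{w}"] = metrics[col].rolling(w).mean()
            feats[f"{col}_std_{w}"] = metrics[col].rolling(w).std()
    return pd.DataFrame(feats)

def train_ensemble(metrics: pd.DataFrame, target_col="cpu"):
    """Fit a LightGBM regressor on the multivariate features to predict the
    next-step demand of one resource; L1/L2 penalties help curb overfitting."""
    X = build_features(metrics)
    y = metrics[target_col].shift(-1)      # next-step demand as the label
    mask = X.notna().all(axis=1) & y.notna()
    model = lgb.LGBMRegressor(
        n_estimators=300,
        num_leaves=31,
        learning_rate=0.05,
        reg_alpha=0.1,                     # L1 regularization
        reg_lambda=1.0,                    # L2 regularization
    )
    model.fit(X[mask], y[mask])
    return model

# Hypothetical usage: `metrics` is a DataFrame indexed by timestamp with
# 'cpu', 'memory', and 'network' utilization columns at a fixed sampling interval.
# model = train_ensemble(metrics)
# next_cpu = model.predict(build_features(metrics).iloc[[-1]])
```

In a deployment like the one described, a model of this kind would run inside a container and expose its predictions through the API-driven automation workflow that feeds the scheduler.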

Tests on a hybrid cloud simulation platform confirmed that the forecasting framework increased average CPU utilization by 19.6 percent and reduced SLA violation incidents by more than 50 percent. Compared with static scheduling strategies, the system adapted more readily to fluctuating loads and made scheduling decisions faster.

The research was developed by Jiaying Huang, a software development engineer on the Amazon EC2 Core Platform team. Huang holds a Master’s degree in Data Informatics from the University of Southern California and a Bachelor’s degree in Computer Science from Nankai University. At Amazon, Huang has led work on automation systems that accelerated the processing of EC2 service quota requests, and contributed to internal demand-shaping frameworks, designing machine learning–driven guardrails to balance availability and utilization across EC2 regions. With experience spanning forecasting and large-scale system optimization, Huang focuses on applying predictive models to real-world infrastructure challenges in global-scale cloud environments.

This study offers a technical blueprint for integrating forecasting capabilities into cloud operations. Its combination of layered modeling and real-world deployment presents a practical solution for infrastructure teams seeking to improve resource efficiency through intelligent prediction.

Contact Info:
Name: Jiaying Huang
Organization: Jiaying Huang
Website: https://scholar.google.com/citations?user=Ax8IcCYAAAAJ&hl=en&authuser=1

Release ID: 89173014

