🤖 AI Summary
This paper addresses the deployment of multi-model inference pipelines on resource-constrained edge devices. We propose an end-to-end adaptive configuration framework that, for the first time, explicitly incorporates device resource constraints into joint optimization decisions. Our method integrates residual feature extraction, LSTM-based workload forecasting, and policy-gradient reinforcement learning to jointly optimize QoS guarantees (e.g., latency and throughput), operational cost, and real-time adaptability. Evaluated on a real Kubernetes-based edge cluster, the framework achieves a 27% reduction in average inference latency, a 31% increase in throughput, a 22% decrease in deployment cost, and over 42% faster configuration decision-making for complex pipelines—outperforming state-of-the-art baselines. The core contributions are: (i) a resource-aware joint optimization model that unifies hardware constraints with pipeline scheduling and scaling decisions; and (ii) a lightweight, learning-driven configuration mechanism enabling efficient, online adaptation under dynamic edge conditions.
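To make the LSTM-based workload forecasting component more concrete, here is a minimal NumPy sketch of a single LSTM cell consuming a window of recent node load measurements and emitting a next-step forecast. Everything here is illustrative: the weights are random (the paper's model would be trained on real traces), and the input dimensions, hidden size, normalization constant, and load values are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single LSTM cell. Weights are random here; the real system would train them."""
    def __init__(self, n_in, n_hidden):
        k = n_in + n_hidden
        # One stacked weight matrix for the input, forget, cell, and output gates.
        self.W = rng.normal(scale=0.1, size=(4 * n_hidden, k))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c_new = f * c + i * np.tanh(g)   # gated cell-state update
        h_new = o * np.tanh(c_new)
        return h_new, c_new

# Hypothetical window of recent per-node load samples (requests/s, made up).
cell = LSTMCell(n_in=1, n_hidden=8)
h, c = np.zeros(8), np.zeros(8)
for load in [120.0, 135.0, 150.0, 160.0, 158.0]:
    h, c = cell.step(np.array([load / 200.0]), h, c)  # crude normalization

# Linear read-out layer (also untrained) maps the hidden state to a forecast.
W_out = rng.normal(scale=0.1, size=8)
forecast = float(W_out @ h) * 200.0
print("next-step load forecast (untrained):", forecast)
```

In the framework described above, such forecasts would feed the configuration controller alongside the residual-network node features, so scaling decisions anticipate load rather than react to it.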
📝 Abstract
The growing demand for real-time processing tasks is driving the need for multi-model inference pipelines on edge devices. However, cost-effectively deploying these pipelines while optimizing Quality of Service (QoS) and cost poses significant challenges. Existing solutions often neglect device resource constraints, focusing mainly on inference accuracy and cost efficiency. To address this, we develop a framework for configuring multi-model inference pipelines. Specifically: 1) We model the decision-making problem by jointly considering the pipeline's QoS, cost, and device resource limitations. 2) We build a feature extraction module using residual networks and a load prediction model based on Long Short-Term Memory (LSTM) to gather comprehensive node and pipeline status information, and then apply a Reinforcement Learning (RL) algorithm based on policy gradients for online configuration decisions. 3) Experiments conducted in a real Kubernetes cluster show that our approach significantly improves QoS while reducing costs and shortening decision-making time for complex pipelines compared to baseline algorithms.
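The policy-gradient configuration step can be sketched as a REINFORCE loop over a discrete catalogue of pipeline configurations. This is a toy stand-in, not the paper's method: the configuration space, the QoS-vs-cost reward weights, and the sinusoidal load trace (standing in for the LSTM forecast) are all invented for illustration.

```python
import numpy as np

# Hypothetical configuration catalogue: (replicas, CPU limit per replica).
CONFIGS = [(1, 0.5), (2, 0.5), (2, 1.0), (4, 1.0)]

def reward(config, load):
    """Toy reward: penalize latency (load / capacity) and deployment cost."""
    replicas, cpu = config
    capacity = replicas * cpu
    return -(load / capacity + 0.5 * capacity)  # cost weight 0.5 is arbitrary

class SoftmaxPolicy:
    """REINFORCE with a moving-average baseline over a discrete action set."""
    def __init__(self, n_actions, lr=0.1, seed=0):
        self.theta = np.zeros(n_actions)
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def probs(self):
        e = np.exp(self.theta - self.theta.max())
        return e / e.sum()

    def act(self):
        return self.rng.choice(len(self.theta), p=self.probs())

    def update(self, action, r, baseline):
        grad = -self.probs()
        grad[action] += 1.0                      # grad of log pi(a | theta)
        self.theta += self.lr * (r - baseline) * grad

policy = SoftmaxPolicy(len(CONFIGS))
rewards = []
for step in range(2000):
    load = 2.0 + np.sin(step / 50.0)   # stand-in for the LSTM load forecast
    a = policy.act()
    r = reward(CONFIGS[a], load)
    rewards.append(r)
    policy.update(a, r, np.mean(rewards[-100:]))

best = int(np.argmax(policy.probs()))
print("preferred configuration:", CONFIGS[best])
```

With this reward shape the policy settles on the middle-capacity option, which balances the latency penalty against cost; in the full framework the action space would instead span the pipeline's real scheduling and scaling knobs, conditioned on the extracted node features.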