Adaptive Configuration Selection for Multi-Model Inference Pipelines in Edge Computing

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the deployment of multi-model inference pipelines on resource-constrained edge devices. We propose an end-to-end adaptive configuration framework that, for the first time, explicitly incorporates device resource constraints into joint optimization decisions. Our method integrates residual feature extraction, LSTM-based workload forecasting, and policy-gradient reinforcement learning to jointly optimize QoS guarantees (e.g., latency and throughput), operational cost, and real-time adaptability. Evaluated on a real Kubernetes-based edge cluster, the framework achieves a 27% reduction in average inference latency, a 31% increase in throughput, a 22% decrease in deployment cost, and over 42% faster configuration decision-making for complex pipelines—outperforming state-of-the-art baselines. The core contributions are: (i) a resource-aware joint optimization model that unifies hardware constraints with pipeline scheduling and scaling decisions; and (ii) a lightweight, learning-driven configuration mechanism enabling efficient, online adaptation under dynamic edge conditions.

📝 Abstract
The growing demand for real-time processing tasks is driving the need for multi-model inference pipelines on edge devices. However, cost-effectively deploying these pipelines while optimizing Quality of Service (QoS) and costs poses significant challenges. Existing solutions often neglect device resource constraints, focusing mainly on inference accuracy and cost efficiency. To address this, we develop a framework for configuring multi-model inference pipelines. Specifically: 1) We model the decision-making problem by considering the pipeline's QoS, costs, and device resource limitations. 2) We create a feature extraction module using residual networks and a load prediction model based on Long Short-Term Memory (LSTM) to gather comprehensive node and pipeline status information. Then, we implement a Reinforcement Learning (RL) algorithm based on policy gradients for online configuration decisions. 3) Experiments conducted in a real Kubernetes cluster show that our approach significantly improves QoS while reducing costs and shortening decision-making time for complex pipelines compared to baseline algorithms.
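The abstract's online configuration mechanism (a policy-gradient RL agent choosing a pipeline configuration from observed state) can be pictured with a minimal REINFORCE sketch. Everything below is an invented stand-in: the configuration space, the two-feature state, and the toy reward are illustrative only; in the paper, the state comes from the ResNet/LSTM modules and the reward from measured QoS and deployment cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete configuration space: (replicas, cpu_millicores) pairs.
CONFIGS = [(1, 250), (1, 500), (2, 250), (2, 500), (3, 500)]

# Toy state: [predicted load, free CPU fraction] -- a stand-in for the
# paper's ResNet/LSTM feature vector, which is not specified here.
N_FEATURES, N_ACTIONS = 2, len(CONFIGS)
W = np.zeros((N_FEATURES, N_ACTIONS))  # linear softmax policy parameters


def policy(state):
    """Softmax action probabilities over configurations."""
    logits = state @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()


def reward(state, action):
    """Toy reward: QoS benefit of matching capacity to load, minus cost.
    Stands in for the paper's latency/throughput/cost objective."""
    load, _free_cpu = state
    replicas, cpu = CONFIGS[action]
    capacity = replicas * cpu / 1000.0
    qos = -abs(capacity - load)            # reward capacity that tracks load
    cost = 0.1 * replicas + 0.0002 * cpu   # pay for reserved resources
    return qos - cost


# REINFORCE loop: sample a configuration, observe reward, nudge the policy.
alpha, baseline = 0.1, 0.0
for _ in range(3000):
    state = np.array([rng.uniform(0.2, 1.5), rng.uniform(0.1, 1.0)])
    probs = policy(state)
    a = rng.choice(N_ACTIONS, p=probs)
    r = reward(state, a)
    baseline += 0.01 * (r - baseline)      # running-average baseline
    grad = -probs
    grad[a] += 1.0                         # d log pi(a|s) / d logits
    W += alpha * (r - baseline) * np.outer(state, grad)

# The trained policy is intended to prefer larger configurations under
# heavier predicted load (no guarantee in this toy setting).
heavy, light = np.array([1.4, 1.0]), np.array([0.3, 1.0])
print(CONFIGS[int(np.argmax(policy(heavy)))], CONFIGS[int(np.argmax(policy(light)))])
```

The baseline term is the usual variance-reduction trick for REINFORCE; the paper's actual algorithm and network architecture are not described at this level of detail.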
Problem

Research questions and friction points this paper is trying to address.

Optimize QoS and costs for edge multi-model inference pipelines
Address device resource constraints in pipeline configuration
Reduce decision-making time for complex edge pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly model QoS, costs, and device resource constraints
Residual-network feature extraction plus LSTM-based load prediction
Policy-gradient RL for online configuration decisions
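The first bullet, a decision model that unifies QoS, cost, and device limits, can be pictured as a constrained utility: candidate configurations that exceed a node's free resources are rejected outright, and the remainder are ranked by a QoS-minus-cost score. The classes, weights, and numbers below are hypothetical illustrations, not the paper's formulation.

```python
from dataclasses import dataclass


@dataclass
class Node:
    cpu_free: float   # millicores available on the node
    mem_free: float   # MiB available on the node


@dataclass
class Config:
    replicas: int
    cpu_per_replica: float    # millicores
    mem_per_replica: float    # MiB
    est_latency_ms: float     # e.g. from profiling or a load forecast
    cost_per_hour: float


def score(cfg, node, latency_slo_ms=100.0, w_qos=1.0, w_cost=0.5):
    """Return a scalar utility, or None if the node cannot host cfg."""
    if cfg.replicas * cfg.cpu_per_replica > node.cpu_free:
        return None  # violates the CPU constraint
    if cfg.replicas * cfg.mem_per_replica > node.mem_free:
        return None  # violates the memory constraint
    qos = max(0.0, 1.0 - cfg.est_latency_ms / latency_slo_ms)
    return w_qos * qos - w_cost * cfg.cost_per_hour


node = Node(cpu_free=2000, mem_free=4096)
candidates = [
    Config(1, 500, 512, 90.0, 0.10),
    Config(2, 500, 512, 55.0, 0.20),
    Config(4, 1000, 2048, 30.0, 0.40),  # infeasible: needs 4000m CPU
]
feasible = [(score(c, node), c) for c in candidates if score(c, node) is not None]
best = max(feasible, key=lambda t: t[0])[1]
print(best.replicas)  # → 2
```

The fastest configuration is filtered out because it does not fit on the node, so the two-replica deployment wins on the QoS/cost trade-off, which is the shape of decision the paper's framework automates.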
👥 Authors
Jinhao Sheng
Institute of Artificial Intelligence and Future Networks, Beijing Normal University, Zhuhai, China
Zhiqing Tang
Associate Professor, Beijing Normal University
Edge Computing, Edge AI Systems, Container, Reinforcement Learning
Jianxiong Guo
Associate Professor of Computer Science, Beijing Normal University
IoT/Edge Intelligence, Online/Federated Learning, Social Computing, Combinatorial Optimization
Tian Wang
Institute of Artificial Intelligence and Future Networks, Beijing Normal University, Zhuhai, China