Joint Optimization of DNN Model Caching and Request Routing in Mobile Edge Computing

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In mobile edge computing (MEC), limited cache capacity at edge servers and high deep neural network (DNN) loading latency degrade user quality of experience (QoE). To address this, we propose a dynamic DNN architecture that decomposes a full model into independent, cacheable, and schedulable submodels. We formulate the first systematic joint optimization of submodel caching placement and request routing to balance inference accuracy and loading latency. We design CoCaR, an offline algorithm based on linear programming and randomized rounding, and extend it to CoCaR-OL, an online variant adaptive to dynamic request arrivals. Experiments show that CoCaR improves average inference accuracy by 46% over baseline methods; CoCaR-OL enhances user QoE by at least 32.3% in online settings, significantly outperforming existing decoupled caching-and-routing approaches.

📝 Abstract
Mobile edge computing (MEC) can pre-cache deep neural networks (DNNs) near end-users, providing low-latency services and improving users' quality of experience (QoE). However, caching all DNN models at edge servers with limited capacity is difficult, and the impact of model loading time on QoE remains underexplored. Hence, we introduce dynamic DNNs in edge scenarios, disassembling a complete DNN model into interrelated submodels for more fine-grained and flexible model caching and request routing solutions. This raises the pressing issue of jointly deciding request routing and submodel caching for dynamic DNNs to balance model inference precision and loading latency for QoE optimization. In this paper, we study the joint dynamic model caching and request routing problem in MEC networks, aiming to maximize user request inference precision under constraints of server resources, latency, and model loading time. To tackle this problem, we propose CoCaR, an offline algorithm based on linear programming and randomized rounding that leverages dynamic DNNs to optimize caching and routing schemes, achieving near-optimal performance. Furthermore, we develop an online variant of CoCaR, named CoCaR-OL, enabling effective adaptation to dynamic and unpredictable online request patterns. The simulation results demonstrate that the proposed CoCaR improves the average inference precision of user requests by 46% compared to state-of-the-art baselines. In addition, in online scenarios, CoCaR-OL achieves an improvement of no less than 32.3% in user QoE over competitive baselines.
Problem

Research questions and friction points this paper is trying to address.

Optimizing DNN model caching and request routing in edge computing
Balancing model inference precision against loading latency constraints
Maximizing user QoE under server resource and latency limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic DNNs split models into submodels for caching
CoCaR algorithm optimizes caching and routing jointly
Online variant CoCaR-OL adapts to dynamic request patterns
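The LP-relaxation-plus-randomized-rounding idea behind CoCaR can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the fractional values stand in for an LP solution, the function name, sizes, capacity, and the feasibility-retry loop are not taken from the paper's actual formulation.

```python
import random

def randomized_rounding(x_frac, sizes, capacity, seed=0, max_tries=1000):
    """Round fractional caching decisions x_frac (e.g., from an LP
    relaxation) into a feasible 0/1 submodel placement that respects
    an edge server's cache-capacity limit.

    Each submodel i is cached independently with probability x_frac[i];
    we resample until the total cached size fits within capacity."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        placement = [1 if rng.random() < p else 0 for p in x_frac]
        if sum(s * c for s, c in zip(sizes, placement)) <= capacity:
            return placement
    # Fallback if no feasible rounding was sampled: cache nothing
    return [0] * len(x_frac)

# Toy fractional solution for 4 submodels (illustrative values only)
x_frac = [0.9, 0.6, 0.3, 0.1]   # LP-relaxed caching variables in [0, 1]
sizes = [4, 3, 2, 1]            # submodel sizes (arbitrary units)
capacity = 7                    # edge-server cache capacity
placement = randomized_rounding(x_frac, sizes, capacity)
```

In expectation the rounded placement preserves the fractional objective, which is why rounding an LP solution can yield the near-optimal caching schemes the paper reports; the retry loop is one simple way to restore feasibility after rounding.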
Shuting Qiu
School of Computer Science and Engineering, Southeast University, Nanjing 211189, China
Fang Dong
Southeast University
Edge Computing, Cloud, AIoT
Siyu Tan
School of Computer Science and Engineering, Southeast University, Nanjing 211189, China
Ruiting Zhou
School of Computer Science and Engineering, Southeast University, Nanjing 211189, China
Dian Shen
School of Computer Science and Engineering, Southeast University, Nanjing 211189, China
Patrick P. C. Lee
The Chinese University of Hong Kong
storage systems, networks, distributed systems, dependability
Qilin Fan
Chongqing University
Anomaly Detection, Edge Caching, Network Function Virtualization