🤖 AI Summary
In mobile edge computing (MEC), limited cache capacity at edge servers and high deep neural network (DNN) loading latency degrade user quality of experience (QoE). To address this, we propose a dynamic DNN architecture that decomposes a full model into independent, cacheable, and schedulable submodels. We formulate the first systematic joint optimization of submodel caching placement and request routing to balance inference accuracy and loading latency. We design CoCaR, an offline algorithm based on linear programming and randomized rounding, and extend it to CoCaR-OL, an online variant adaptive to dynamic request arrivals. Experiments show that CoCaR improves average inference accuracy by 46% over baseline methods; CoCaR-OL enhances user QoE by at least 32.3% in online settings, significantly outperforming existing decoupled caching-and-routing approaches.
📝 Abstract
Mobile edge computing (MEC) can pre-cache deep neural networks (DNNs) near end-users, providing low-latency services and improving users' quality of experience (QoE). However, caching all DNN models at edge servers with limited capacity is difficult, and the impact of model loading time on QoE remains underexplored. Hence, we introduce dynamic DNNs in edge scenarios, disassembling a complete DNN model into interrelated submodels for more fine-grained and flexible model caching and request routing solutions. This raises the pressing issue of jointly deciding request routing and submodel caching for dynamic DNNs to balance model inference precision and loading latency for QoE optimization. In this paper, we study the joint dynamic model caching and request routing problem in MEC networks, aiming to maximize user request inference precision under constraints of server resources, latency, and model loading time. To tackle this problem, we propose CoCaR, an offline algorithm based on linear programming and randomized rounding that leverages dynamic DNNs to optimize caching and routing schemes, achieving near-optimal performance. Furthermore, we develop an online variant of CoCaR, named CoCaR-OL, enabling effective adaptation to dynamic and unpredictable online request patterns. The simulation results demonstrate that the proposed CoCaR improves the average inference precision of user requests by 46% compared to state-of-the-art baselines. In addition, in online scenarios, CoCaR-OL achieves an improvement of no less than 32.3% in user QoE over competitive baselines.
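To make the "linear programming plus randomized rounding" idea concrete, here is a minimal sketch of the caching side only (routing omitted). It is not the paper's CoCaR algorithm: the submodel sizes, precision gains, and single-server capacity below are hypothetical, and the LP here reduces to a fractional knapsack, whose optimum can be computed greedily. The fractional solution is then rounded by caching each submodel with probability equal to its fractional value, with a simple repair step if the rounded set exceeds capacity.

```python
import random

def lp_relax_cache(sizes, gains, capacity):
    """Fractional optimum of a toy caching LP:
    maximize sum(gains[i] * x[i]) s.t. sum(sizes[i] * x[i]) <= capacity,
    0 <= x[i] <= 1. For this single-constraint LP (a knapsack
    relaxation), the greedy fill by gain density is exact."""
    order = sorted(range(len(sizes)),
                   key=lambda i: gains[i] / sizes[i], reverse=True)
    x = [0.0] * len(sizes)
    remaining = capacity
    for i in order:
        take = min(1.0, remaining / sizes[i])
        x[i] = take
        remaining -= take * sizes[i]
        if remaining <= 0:
            break
    return x

def randomized_round(x, sizes, capacity, rng):
    """Cache submodel i with probability x[i]; if the rounded set
    overflows capacity, evict lowest-density submodels until feasible."""
    picked = [i for i, xi in enumerate(x) if rng.random() < xi]
    picked.sort(key=lambda i: -1.0 * x[i])  # keep high-fraction picks
    while sum(sizes[i] for i in picked) > capacity:
        picked.pop()  # drop the least fractionally-favored pick
    return sorted(picked)

# Hypothetical instance: three submodels, one edge server of capacity 5.
sizes = [4, 3, 2]
gains = [8, 9, 4]
x = lp_relax_cache(sizes, gains, capacity=5)
cached = randomized_round(x, sizes, capacity=5, rng=random.Random(0))
```

Running the rounding step many times and keeping the best feasible outcome is the usual way to turn this into a bound-preserving scheme; CoCaR additionally couples these caching variables with request-routing variables in one joint program.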