π€ AI Summary
To address excessive end-to-end latency in edge AI caused by sequential model downloading and on-device inference, this paper proposes SLIDEβthe first framework enabling fine-grained parallelism between downloading and inference: users initiate inference on already-downloaded layers while concurrently receiving subsequent ones. The core innovation lies in modeling inter-layer recursive dependencies and jointly optimizing download bandwidth allocation, spectrum resource scheduling, and computational resource assignment. We formulate the problem within a multi-user downlink system and design a polynomial-time optimal algorithm. Experiments demonstrate that, under stringent latency and communication-resource constraints, SLIDE achieves significantly higher task throughput than conventional serial approaches. This validates the effectiveness, optimality, and scalability of the parallel download-inference architecture.
π Abstract
To support on-device inference, the next-generation mobile networks are expected to support real-time model downloading services to mobile users. However, powerful AI models typically have large model sizes, resulting in excessive end-to-end (E2E) downloading-and-inference (DAI) latency. To address this issue, we propose a simultaneous model downloading and inference (SLIDE) framework, which allows users to perform inference with downloaded layers while simultaneously receiving the remaining layers of the model. To this end, we formulate a task throughput maximization problem by jointly optimizing model provisioning, spectrum bandwidth allocation, and computing resource allocation for multi-user downlink systems. Unlike traditional DAI frameworks, SLIDE introduces recursive dependencies across layers, where inference latency depends recursively on the downloading bandwidth and computing resource allocation for each of the preceding layers. To solve this challenging problem, we design an efficient algorithm that acquires the optimal solution with polynomial-time complexity. Simulation results demonstrate that the proposed SLIDE framework significantly improves task throughput under latency and communication resource constraints compared with the conventional model downloading schemes.