🤖 AI Summary
In wireless collaborative edge LLM systems, resource-constrained devices suffer from significant cold-start latency due to model loading overhead.
Method: This paper proposes the first latency-aware framework that explicitly accounts for model loading time in collaborative inference scheduling. It dynamically overlaps model loading, computation, and communication across devices, jointly optimizing layer partitioning and device assignment to hide loading latency and minimize device idleness. The problem is formulated as a mixed-integer nonlinear program (MINLP), and an efficient dynamic programming algorithm is designed to compute the optimal partitioning strategy.
Contribution/Results: Extensive experiments across diverse device configurations and LLMs demonstrate that the method substantially reduces cold-start latency compared to state-of-the-art baselines, while simultaneously satisfying stringent low-latency and on-device privacy requirements.
📝 Abstract
While deploying large language models on edge devices promises low-latency and privacy-preserving AI services, it is hindered by limited device resources. Although pipeline parallelism facilitates distributed inference, existing approaches often ignore the cold-start latency caused by on-demand model loading. In this paper, we propose a latency-aware scheduling framework that overlaps model loading with computation and communication to minimize total inference latency. Based on device and model parameters, the framework dynamically adjusts layer partitioning and allocation to effectively hide loading time, thereby minimizing idle periods. We formulate the problem as a mixed-integer nonlinear program (MINLP) and design an efficient dynamic programming algorithm to optimize model partitioning and device assignment. Experimental results show that the proposed method significantly reduces cold-start latency compared to baseline strategies.
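To make the scheduling idea concrete, the following is a minimal sketch (not the paper's actual algorithm, whose details are not given here) of a dynamic program that partitions contiguous layer blocks across an ordered pipeline of devices. It assumes hypothetical per-layer loading and compute times, a fixed per-hop activation-transfer cost `comm`, and that each device can begin loading its weights at time zero, in parallel with upstream stages, so loading is hidden behind earlier computation and communication.

```python
import math

def partition_layers(load, comp, comm, num_devices):
    """Minimize end-to-end cold-start latency for pipelined inference.

    load[i] / comp[i]: assumed per-layer loading and compute times.
    comm: assumed fixed activation-transfer cost between adjacent devices.
    Each device loads its assigned block starting at time zero, so its
    compute begins only once both its weights are loaded and the previous
    stage's output has arrived.
    Returns (latency, cuts), where cuts[j] = (start, end) is the
    half-open layer range assigned to the j-th device used.
    """
    L = len(load)
    # Prefix sums give O(1) range loading/compute costs.
    pl = [0.0] * (L + 1)
    pc = [0.0] * (L + 1)
    for i in range(L):
        pl[i + 1] = pl[i] + load[i]
        pc[i + 1] = pc[i] + comp[i]

    INF = math.inf
    # dp[j][i]: earliest time the first j devices can finish layers 0..i-1.
    dp = [[INF] * (L + 1) for _ in range(num_devices + 1)]
    cut = [[-1] * (L + 1) for _ in range(num_devices + 1)]
    dp[0][0] = 0.0
    for j in range(1, num_devices + 1):
        for i in range(1, L + 1):
            for k in range(j - 1, i):  # device j takes layers k..i-1
                if dp[j - 1][k] == INF:
                    continue
                arrival = dp[j - 1][k] + (comm if k > 0 else 0.0)
                # Loading overlaps upstream compute and communication:
                # the stage starts at whichever finishes later.
                start = max(arrival, pl[i] - pl[k])
                finish = start + (pc[i] - pc[k])
                if finish < dp[j][i]:
                    dp[j][i] = finish
                    cut[j][i] = k
    # Using fewer devices can win when communication cost dominates.
    best_j = min(range(1, num_devices + 1), key=lambda j: dp[j][L])
    cuts, i = [], L
    for j in range(best_j, 0, -1):
        cuts.append((cut[j][i], i))
        i = cut[j][i]
    cuts.reverse()
    return dp[best_j][L], cuts
```

The recurrence runs in O(D·L²) for D devices and L layers; the `max(arrival, loading)` term is where idle time appears whenever loading cannot be fully hidden, which is exactly what the partitioning choice tries to minimize.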