CSGO: Generalized Optimization for Cold Start in Wireless Collaborative Edge LLM Systems

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
In wireless collaborative edge LLM systems, resource-constrained devices suffer from significant cold-start latency due to model loading overhead. Method: This paper proposes the first latency-aware framework that explicitly models model loading time within collaborative inference scheduling. It dynamically overlaps model loading, computation, and communication across devices, jointly optimizing layer partitioning and device assignment to hide loading latency and minimize device idleness. The problem is formulated as a mixed-integer nonlinear program (MINLP), and an efficient dynamic programming algorithm is designed to compute the optimal partitioning strategy. Contribution/Results: Extensive experiments across diverse device configurations and LLMs demonstrate that the method substantially reduces cold-start latency compared to state-of-the-art baselines, while simultaneously satisfying stringent low-latency and on-device privacy requirements.

📝 Abstract
While deploying large language models on edge devices promises low-latency and privacy-preserving AI services, it is hindered by limited device resources. Although pipeline parallelism facilitates distributed inference, existing approaches often ignore the cold-start latency caused by on-demand model loading. In this paper, we propose a latency-aware scheduling framework that overlaps model loading with computation and communication to minimize total inference latency. Based on device and model parameters, the framework dynamically adjusts layer partitioning and allocation to effectively hide loading time, thereby eliminating as many idle periods as possible. We formulate the problem as a Mixed-Integer Non-Linear Program and design an efficient dynamic programming algorithm to optimize model partitioning and device assignment. Experimental results show that the proposed method significantly reduces cold-start latency compared to baseline strategies.
Problem

Research questions and friction points this paper is trying to address.

Cold-start latency on resource-constrained edge devices caused by on-demand model loading
Idle periods that arise when model loading is not overlapped with computation and communication
Layer partitioning schemes that ignore loading overhead and device heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out.

First latency-aware scheduling framework that explicitly models model loading time in collaborative inference
Dynamic layer partitioning overlaps loading with computation and communication to hide loading latency
MINLP formulation solved by an efficient dynamic programming algorithm for joint partitioning and device assignment
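The partitioning idea can be illustrated with a toy dynamic program (a sketch under assumed costs, not the paper's actual algorithm or notation): each device gets a contiguous block of layers; weight loading on a downstream device proceeds concurrently with upstream stages, so a stage starts at the later of "its weights are loaded" and "activations arrive". The per-layer `load_t`/`comp_t` costs and the per-hop `comm_t` delay are hypothetical parameters.

```python
import math

def partition(n_layers, load_t, comp_t, comm_t):
    """Toy DP: return (cold-start latency, cut points) for assigning
    contiguous layer blocks to devices in pipeline order.

    load_t[d], comp_t[d]: per-layer loading/compute time on device d
    comm_t: activation transfer time between adjacent devices.
    All costs are hypothetical; the paper's model is more general.
    """
    n_dev = len(load_t)
    # dp[d][i]: earliest finish time if devices 0..d cover the first
    # i layers and device d holds a non-empty block.
    dp = [[math.inf] * (n_layers + 1) for _ in range(n_dev)]
    choice = [[0] * (n_layers + 1) for _ in range(n_dev)]
    for i in range(1, n_layers + 1):
        # Device 0 has nothing to overlap with: load then compute.
        dp[0][i] = i * (load_t[0] + comp_t[0])
    for d in range(1, n_dev):
        for i in range(d + 1, n_layers + 1):
            for j in range(d, i):          # device d runs layers j..i-1
                k = i - j
                # Loading overlaps upstream work: stage d starts when
                # both its weights and the incoming activations are ready.
                start = max(k * load_t[d], dp[d - 1][j] + comm_t)
                t = start + k * comp_t[d]
                if t < dp[d][i]:
                    dp[d][i], choice[d][i] = t, j
    # Pick the best number of participating devices, then backtrack cuts.
    best_d = min(range(n_dev), key=lambda d: dp[d][n_layers])
    cuts, i = [], n_layers
    for d in range(best_d, 0, -1):
        i = choice[d][i]
        cuts.append(i)
    return dp[best_d][n_layers], cuts[::-1]
```

For example, with a slow-loading first device (`load_t=[1.0, 0.5]`) the DP pushes most layers onto device 1, whose loading is hidden behind device 0's work; this is the idle-elimination effect the paper targets, here in a deliberately simplified cost model.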