🤖 AI Summary
To address the high inference cost, significant latency, and excessive cloud dependency of large language models (LLMs) in cloud-edge collaborative settings, this paper proposes a progressive cloud-edge inference paradigm: the cloud-based large model generates a lightweight semantic skeleton, while edge-resident small models concurrently fill in fine-grained details. We introduce, for the first time, a semantic-driven dynamic task scheduling mechanism and a multi-model ensemble framework to jointly optimize throughput and latency under quality constraints. Key technical contributions include semantic skeleton generation, edge-side parallel expansion, ensemble learning for small models, and cloud-edge coordinated scheduling. Experimental evaluation demonstrates that, compared to state-of-the-art systems, our approach achieves 1.5×–2× higher throughput, reduces end-to-end latency by up to 43%, and maintains or improves response quality.
📝 Abstract
Large language models (LLMs), while driving a new wave of interactive AI applications across numerous domains, suffer from high inference costs and heavy cloud dependency. Motivated by the redundancy phenomenon in linguistics, we propose a progressive inference paradigm over cloud and edge: the cloud-side LLM first generates a sketch of the answer, and small language models (SLMs) at the edge then extend it in parallel to fill in details. Progressive inference offers potential benefits for improving throughput and reducing inference latency, but faces key implementation challenges, including decreased response quality from SLMs, a tradeoff between the brevity and comprehensiveness of sketches, and increased latency caused by network transmission and edge inference. In this work, we propose and implement PICE, an LLM serving system with semantic-level cloud-edge collaboration, enhancing inference throughput and quality through dynamic inference task scheduling, ensemble learning, and parallel edge inference. Extensive testbed experiments demonstrate that our approach achieves $1.5$–$2\times$ throughput enhancement and up to 43% latency reduction, while also potentially enhancing response quality compared to SOTA systems.
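The sketch-then-extend flow described above can be illustrated with a minimal sketch in Python. The cloud LLM and edge SLM calls are stubbed with plain string functions, and the names (`generate_sketch`, `expand_point`, `progressive_inference`) are illustrative placeholders, not PICE's actual API; the point is only the control flow: one sequential cloud call for the skeleton, then parallel edge-side expansion of each skeleton point.

```python
# Hypothetical sketch of progressive cloud-edge inference:
# cloud generates a semantic skeleton; edge SLMs expand points in parallel.
from concurrent.futures import ThreadPoolExecutor

def generate_sketch(query: str) -> list[str]:
    """Stand-in for the cloud LLM: return a lightweight semantic
    skeleton as an ordered list of answer points."""
    return [f"point {i} for '{query}'" for i in range(1, 4)]

def expand_point(point: str) -> str:
    """Stand-in for an edge SLM: fill in fine-grained details
    for a single skeleton point."""
    return f"{point} -- expanded with details"

def progressive_inference(query: str) -> str:
    sketch = generate_sketch(query)        # cloud: sequential sketch generation
    with ThreadPoolExecutor() as pool:     # edge: parallel extension
        details = list(pool.map(expand_point, sketch))
    return "\n".join(details)              # skeleton order is preserved

print(progressive_inference("why is the sky blue?"))
```

In the real system, the scheduling decision (which points go to which edge SLM, or back to the cloud under quality constraints) replaces the simple `pool.map` above.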