PICE: A Semantic-Driven Progressive Inference System for LLM Serving in Cloud-Edge Networks

📅 2025-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high inference cost, significant latency, and excessive cloud dependency of large language models (LLMs) in cloud-edge collaborative settings, this paper proposes a progressive cloud-edge inference paradigm: the cloud-based large model generates a lightweight semantic skeleton, while edge-resident small models concurrently fill in fine-grained details. We introduce, for the first time, a semantic-driven dynamic task scheduling mechanism and a multi-model ensemble framework to jointly optimize throughput and latency under quality constraints. Key technical contributions include semantic skeleton generation, edge-side parallel expansion, ensemble learning for small models, and cloud-edge coordinated scheduling. Experimental evaluation demonstrates that, compared to state-of-the-art systems, our approach achieves 1.5×–2× higher throughput, reduces end-to-end latency by up to 43%, and maintains or improves response quality.

📝 Abstract
Large language models (LLMs), while driving a new wave of interactive AI applications across numerous domains, suffer from high inference costs and heavy cloud dependency. Motivated by the redundancy phenomenon in linguistics, we propose a progressive inference paradigm over cloud and edge, i.e., first generating a sketch of the answer with an LLM in the cloud, and then conducting parallel extension to fill in details with small language models (SLMs) at the edge. Progressive inference offers potential benefits to improve throughput and reduce inference latency, while facing key implementation challenges, including decreased response quality from SLMs, a tradeoff between the brevity and comprehensiveness of sketches, as well as increased latency caused by network transmission and edge inference. In this work, we propose and implement PICE, an LLM serving system with semantic-level cloud-edge collaboration, enhancing inference throughput and quality through dynamic inference task scheduling, ensemble learning, and parallel edge inference. Extensive testbed experiments illustrate that our approach achieves 1.5–2× throughput enhancement and up to 43% latency reduction, while also potentially enhancing the quality compared to SOTA systems.
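The sketch-then-expand pipeline described in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not PICE's actual implementation: `cloud_generate_sketch` and `edge_expand` are stand-ins for the cloud LLM and the edge SLMs, and the parallelism is modeled with a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the cloud LLM: returns a concise semantic
# skeleton, one entry per sub-topic of the answer.
def cloud_generate_sketch(prompt: str) -> list[str]:
    return [f"point {i}: {prompt}" for i in range(3)]

# Hypothetical stand-in for an edge SLM: fills in fine-grained details
# for a single sketch point.
def edge_expand(sketch_point: str) -> str:
    return sketch_point + " -- expanded with details by an edge SLM"

def progressive_inference(prompt: str, max_edge_workers: int = 4) -> str:
    # 1. Cloud phase: generate a lightweight semantic sketch.
    sketch = cloud_generate_sketch(prompt)
    # 2. Edge phase: expand sketch points in parallel on edge SLMs.
    with ThreadPoolExecutor(max_workers=max_edge_workers) as pool:
        details = list(pool.map(edge_expand, sketch))
    # 3. Assemble the final answer, preserving the sketch order.
    return "\n".join(details)

print(progressive_inference("explain cloud-edge LLM serving"))
```

In the real system, the scheduling of which sketch points go to which edge SLMs is dynamic and quality-aware; the thread pool here only captures the parallel-expansion structure.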
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Cost Efficiency
Response Time
Innovation

Methods, ideas, or system contributions that make the work stand out.

PICE System
Large Language Models (LLMs)
Small Language Models (SLMs)
Huiyou Zhan
LINKE Lab and the CAS Key Laboratory of Wireless-Optical Communications, University of Science and Technology of China (USTC), Hefei 230027, China
Xuan Zhang
LINKE Lab and the CAS Key Laboratory of Wireless-Optical Communications, University of Science and Technology of China (USTC), Hefei 230027, China
Haisheng Tan
Computer Science, University of Science and Technology of China
Networking Algorithms, Internet of Things, Cloud Computing, Edge Computing
Han Tian
University of Science and Technology of China
Machine learning, networking, privacy computing
Dongping Yong
LINKE Lab and the CAS Key Laboratory of Wireless-Optical Communications, University of Science and Technology of China (USTC), Hefei 230027, China
Junyang Zhang
California Institute of Technology, Stanford University, University of California, Irvine
Machine learning and ML systems, robotics, digital design, semiconductors, integrated circuits
Xiangyang Li
LINKE Lab and the CAS Key Laboratory of Wireless-Optical Communications, University of Science and Technology of China (USTC), Hefei 230027, China