Collaborative Device-Cloud LLM Inference through Reinforcement Learning

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
In device-cloud collaborative deployment of large language models (LLMs), query routing (deciding whether to execute inference locally or offload to the cloud) remains a critical challenge. Method: this paper proposes a device-autonomous offloading mechanism: after generating a local answer, the on-device LLM itself decides whether to offload the task to the cloud. The routing decision is embedded at the end of the inference process and formulated as an end-to-end reinforcement learning problem. To obtain an unbiased gradient estimate, the authors develop a group-adaptive policy gradient algorithm; an adaptive prompt filtering mechanism additionally constrains cloud resource consumption. Results: extensive experiments across multiple LLMs and benchmark datasets show that the approach significantly outperforms existing routing strategies, substantially narrowing the gap between device-cloud systems and cloud-only deployment while jointly optimizing inference efficiency, answer accuracy, and controllability of cloud resource usage.
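The end-of-inference routing mechanism above can be sketched as follows. All names here (`solve_locally`, `cloud_answer`, `OFFLOAD_TOKEN`) are hypothetical stand-ins, since the summary does not specify the actual interface; the key point is that the offload decision is made by the device model itself, after it has produced a local answer.

```python
# Hedged sketch of device-autonomous routing: the on-device LLM answers
# first, then signals whether to offload. OFFLOAD_TOKEN, solve_locally,
# and cloud_answer are illustrative assumptions, not the paper's API.

from dataclasses import dataclass

OFFLOAD_TOKEN = "<offload>"  # assumed special token emitted by the device LLM


@dataclass
class InferenceResult:
    answer: str
    offloaded: bool


def solve_locally(prompt: str) -> str:
    # Stand-in for on-device generation; the trained model would append
    # OFFLOAD_TOKEN when it judges its own answer unreliable. Here we
    # fake that judgment with a keyword check purely for illustration.
    if "hard" in prompt:
        return "local draft answer " + OFFLOAD_TOKEN
    return "local answer"


def cloud_answer(prompt: str) -> str:
    # Stand-in for the powerful cloud LLM.
    return "cloud answer"


def route(prompt: str) -> InferenceResult:
    local = solve_locally(prompt)
    if local.endswith(OFFLOAD_TOKEN):
        # Routing happens *after* local solving, decided by the model itself,
        # rather than by an external router that only sees the prompt.
        return InferenceResult(cloud_answer(prompt), offloaded=True)
    return InferenceResult(local, offloaded=False)
```

In contrast to an external binary classifier, this design lets the decision condition on the model's own solution attempt, not just the prompt's surface pattern.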

📝 Abstract
Device-cloud collaboration has emerged as a promising paradigm for deploying large language models (LLMs), combining the efficiency of lightweight on-device inference with the superior performance of powerful cloud LLMs. An essential problem in this scenario lies in deciding whether a given query is best handled locally or delegated to the cloud. Existing approaches typically rely on external routers, implemented as binary classifiers, which often struggle to determine task difficulty from the prompt's surface pattern. To address these limitations, we propose a framework where the on-device LLM makes routing decisions at the end of its solving process, with this capability instilled through post-training. In particular, we formulate a reward maximization problem with carefully designed rewards that encourage effective problem solving and judicious offloading to the cloud. To solve this problem, we develop a group-adaptive policy gradient algorithm, featuring a group-level policy gradient, designed to yield an unbiased gradient estimator of the reward, and adaptive prompt filtering, developed to enforce the constraint on cloud LLM usage. Extensive experiments across models and benchmarks show that the proposed methodology consistently outperforms existing baselines and significantly narrows the gap to full cloud LLM performance.
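The reward design and adaptive prompt filtering described in the abstract can be sketched minimally as below. The reward shape, the penalty weight `lam`, and the `budget` threshold are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: reward solving correctly, charge a fixed cost for
# offloading, and filter out prompts whose rollouts overuse the cloud.

def reward(correct: bool, offloaded: bool, lam: float = 0.3) -> float:
    # Encourage effective problem solving; the lam * offloaded penalty
    # pushes the policy toward judicious (not reflexive) offloading.
    return float(correct) - lam * float(offloaded)


def filter_prompts(groups, budget: float = 0.5):
    # Adaptive prompt filtering sketch: each group holds (reward, offloaded)
    # rollouts for one prompt. Dropping prompts whose empirical offload rate
    # exceeds the budget is one plausible way to enforce the constraint on
    # cloud LLM usage during training.
    kept = []
    for rollouts in groups:
        offload_rate = sum(off for _, off in rollouts) / len(rollouts)
        if offload_rate <= budget:
            kept.append(rollouts)
    return kept
```

The balance between `lam` and the filtering budget is what would trade answer accuracy against cloud resource consumption in such a setup.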
Problem

Research questions and friction points this paper is trying to address.

Optimizing device-cloud routing decisions for LLM inference
Addressing limitations of external binary classifier routers
Maximizing reward through adaptive offloading and prompt filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

The on-device LLM makes the routing decision at the end of its own solving process
Reward maximization with incentives for effective problem solving and judicious offloading
Group-adaptive policy gradient algorithm with an unbiased gradient estimator and adaptive prompt filtering
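The group-level policy gradient named above can be illustrated as REINFORCE with a group-mean baseline: centering each rollout's reward on its group's mean leaves the gradient estimator unbiased while reducing variance. This is a generic sketch of that estimator family; the paper's exact algorithm may differ in details such as normalization.

```python
# Hedged sketch of a group-level policy gradient surrogate loss.

def group_advantages(rewards):
    # Center the rewards of G rollouts for the same prompt on their
    # group mean, a baseline that does not bias the gradient.
    mean_r = sum(rewards) / len(rewards)
    return [r - mean_r for r in rewards]


def group_pg_loss(logps, rewards):
    # Negative advantage-weighted log-likelihood: minimizing this
    # surrogate yields the group-level policy gradient.
    advs = group_advantages(rewards)
    return -sum(lp * a for lp, a in zip(logps, advs)) / len(logps)
```

In practice `logps` would be the policy's sequence log-probabilities for each rollout, and the loss would be averaged over the prompts surviving the adaptive filtering step.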