🤖 AI Summary
This work addresses the challenges of high latency, high cost, and resource constraints in deploying large language models (LLMs) across cloud-edge-device collaborative environments. The authors propose ConsRoute, a novel framework that, for the first time, introduces fine-grained semantic consistency as a routing supervision signal. By reusing hidden states from the LLM's prefilling phase to construct lightweight query representations, ConsRoute dynamically learns adaptive routing thresholds through clustering and Bayesian optimization, jointly optimizing response quality, latency, and cost. Notably, the method requires no additional encoder and achieves over 95% of cloud-level performance while reducing end-to-end latency and inference cost by nearly 40%, significantly outperforming existing routing strategies.
📝 Abstract
Large language models (LLMs) deliver impressive capabilities but incur substantial inference latency and cost, which hinders their deployment in latency-sensitive and resource-constrained scenarios. Cloud-edge-device collaborative inference has emerged as a promising paradigm by dynamically routing queries to models of different capacities across tiers. In this paper, we propose ConsRoute, a lightweight, semantic-aware, and adaptive routing framework that significantly improves inference efficiency while minimizing impact on response quality. Unlike prior routing methods that rely on predicting coarse-grained output quality gaps, ConsRoute leverages a reranker to directly assess the semantic consistency between responses generated by models at different tiers, yielding fine-grained soft supervision signals for routing. To minimize device-side overhead, ConsRoute reuses hidden states from the LLM prefilling stage as compact query representations, avoiding additional encoders or inference passes. Furthermore, these representations are clustered, and Bayesian optimization is employed to learn cluster-specific routing thresholds that dynamically balance quality, latency, and cost under heterogeneous query distributions. Extensive experiments demonstrate that ConsRoute achieves near-cloud performance (≥95%) while reducing end-to-end latency and inference cost by nearly 40%, consistently outperforming existing routing baselines in both response quality and system efficiency.
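The routing decision described above can be sketched in a few lines. This is a minimal, hypothetical illustration (all names, weights, and thresholds are invented for clarity, not taken from the paper): prefill hidden states are mean-pooled into a compact query embedding, the embedding is assigned to its nearest cluster centroid, and a lightweight score standing in for predicted semantic consistency is compared against that cluster's threshold to choose the serving tier.

```python
import numpy as np

# Illustrative sketch of cluster-conditioned threshold routing.
# Assumptions (not from the paper): centroids come from offline k-means over
# query embeddings, and per-cluster thresholds were tuned offline, e.g. via
# Bayesian optimization over a quality/latency/cost objective.

rng = np.random.default_rng(0)

HIDDEN_DIM = 8
centroids = rng.normal(size=(3, HIDDEN_DIM))   # offline cluster centroids
thresholds = np.array([0.55, 0.70, 0.62])      # per-cluster routing thresholds

def query_embedding(prefill_hidden_states: np.ndarray) -> np.ndarray:
    """Mean-pool token-level hidden states (seq_len, hidden_dim) into one vector,
    reusing the device model's prefill pass instead of a separate encoder."""
    return prefill_hidden_states.mean(axis=0)

def predict_consistency(embedding: np.ndarray) -> float:
    """Placeholder for a lightweight head trained on reranker-derived semantic
    consistency labels; higher means the smaller model's answer likely suffices."""
    w = np.ones(HIDDEN_DIM) / HIDDEN_DIM       # dummy weights for illustration
    return float(1.0 / (1.0 + np.exp(-embedding @ w)))  # sigmoid score in (0, 1)

def route(prefill_hidden_states: np.ndarray) -> str:
    emb = query_embedding(prefill_hidden_states)
    cluster = int(np.argmin(np.linalg.norm(centroids - emb, axis=1)))
    score = predict_consistency(emb)
    # Keep the device-tier response when predicted consistency clears the
    # cluster-specific threshold; otherwise escalate the query to the cloud.
    return "device" if score >= thresholds[cluster] else "cloud"

hidden = rng.normal(size=(16, HIDDEN_DIM))     # fake prefill states for one query
print(route(hidden))
```

In a real system the consistency head would be trained on reranker scores between tier responses, and the thresholds would trade off the quality, latency, and cost terms the abstract mentions; here both are stubs.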