🤖 AI Summary
To address the trade-off between cost and performance in large language model (LLM) deployment, this paper proposes a reinforcement learning–based dynamic routing framework. Departing from hand-crafted rules and static scheduling policies, the framework explicitly models both inference cost and task completion quality in a cost-aware reward function, enabling end-to-end learning of adaptive routing decisions that select either a high-performance or a lightweight model for each subtask. The system integrates tool invocation, fine-grained cost accounting, and scalable reward modeling, supporting multi-model collaboration and flexible integration of external APIs. Evaluated on multiple benchmarks, the method achieves high task completion rates (≥92%) while reducing average inference cost by 37%–58%. Ablation studies further identify critical factors in system efficacy, including the training stability of small open-source models under collaborative orchestration.
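The summary's central idea, a reward that trades off task quality against inference cost, can be sketched minimally as follows. This is an illustrative form only (reward = quality − λ·cost, with a deployer-chosen weight `lam`), not the paper's exact formulation; the `Episode` fields and default weight are assumptions for the example.

```python
# Illustrative cost-aware reward for routing (NOT the paper's exact formula):
# reward task quality, penalize accumulated inference cost, and let a
# trade-off weight lam set how strongly cost is punished.
from dataclasses import dataclass

@dataclass
class Episode:
    quality: float   # task completion score in [0, 1], e.g. from a grader
    cost_usd: float  # total inference cost across all invoked models

def cost_aware_reward(ep: Episode, lam: float = 2.0) -> float:
    """Higher quality raises reward; each dollar spent lowers it by lam."""
    return ep.quality - lam * ep.cost_usd

# Under this shaping, a cheap model that solves the task is preferred over
# an expensive one that also solves it.
cheap = Episode(quality=1.0, cost_usd=0.01)
pricey = Episode(quality=1.0, cost_usd=0.25)
assert cost_aware_reward(cheap) > cost_aware_reward(pricey)
```

With such a reward, end-to-end RL can, in principle, learn when the quality gain of a premium model justifies its price, which is the adaptive behavior the summary describes.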
📝 Abstract
Modern LLM deployments confront a widening cost-performance spectrum: premium models deliver strong reasoning but are expensive, while lightweight models are economical yet brittle on complex tasks. Static escalation rules and keyword heuristics under-utilize this spectrum and fail to adapt across task types. We present xRouter, a tool-calling-based routing system in which a learned router can either answer directly or invoke one or more external models. The router is trained end-to-end with reinforcement learning using an explicit, cost-aware reward that encodes cost-performance trade-offs, eliminating the need for hand-engineered routing rules. Our implementation encompasses the full reinforcement learning framework, including reward and cost accounting, as well as the deployment and evaluation pipelines. Across diverse benchmarks, xRouter achieves strong cost-performance trade-offs (e.g., substantial cost reductions at comparable task completion rates), and provides empirical insights into what reliably helps learned routing and what does not, ranging from model trainability to the difficulty of eliciting sophisticated orchestration behaviors in small open models. We hope these findings and our open implementation will serve as a practical substrate for advancing learned, cost-aware LLM orchestration.
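The abstract describes a router that, on each step, either answers directly or invokes external models as tools. A minimal sketch of that dispatch shape is below; the registry, function names, and per-call costs are hypothetical placeholders, not xRouter's actual API.

```python
# Minimal sketch of the tool-calling routing decision described in the
# abstract. The model registry, names, and costs are hypothetical; in the
# real system the choice is made by a learned, RL-trained router.
from typing import Callable, Dict, Tuple

# Hypothetical registry: model name -> (inference callable, cost per call, USD).
MODELS: Dict[str, Tuple[Callable[[str], str], float]] = {
    "lightweight": (lambda q: f"[lite answer to: {q}]", 0.001),
    "premium":     (lambda q: f"[premium answer to: {q}]", 0.05),
}

def route(query: str, router_choice: str) -> Tuple[str, float]:
    """Execute one routing decision; return (answer, cost_usd)."""
    if router_choice == "direct":
        # The router answers itself at no external cost.
        return f"[router answers: {query}]", 0.0
    # Otherwise, invoke the chosen external model as a tool and pay its cost.
    fn, cost = MODELS[router_choice]
    return fn(query), cost

answer, cost = route("2+2?", "lightweight")
```

Logging the returned cost per decision is what enables the fine-grained cost accounting that feeds the cost-aware reward.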