🤖 AI Summary
Existing LLM query routing approaches rely on fixed model sets and centralized external routers, resulting in poor flexibility and inaccurate characterization of individual models' capability boundaries. Method: This paper proposes DiSRouter, a distributed self-routing framework that eliminates the centralized router by constructing a collaborative network of LLM agents; each agent autonomously decides, based on its self-awareness, whether to answer a query itself or forward it to another agent. To this end, the authors design a two-stage self-awareness training pipeline that explicitly improves each LLM's calibration of its own knowledge boundaries. Contribution/Results: Experiments demonstrate that DiSRouter significantly outperforms state-of-the-art routing methods across multiple benchmarks: it effectively discriminates query difficulty, generalizes well across domains, and scales robustly as the number of models grows, highlighting superior adaptability, accuracy, and extensibility.
📝 Abstract
The proliferation of Large Language Models (LLMs) has created a diverse ecosystem of models with highly varying performance and costs, necessitating effective query routing to balance performance and expense. Current routing systems often rely on a centralized external router trained on a fixed set of LLMs, making them inflexible and prone to poor performance, since the small router cannot fully understand the knowledge boundaries of different LLMs. We introduce DiSRouter (Distributed Self-Router), a novel paradigm that shifts from centralized control to distributed routing. In DiSRouter, a query traverses a network of LLM agents, each independently deciding whether to answer or route to other agents based on its own self-awareness, i.e., its ability to judge its own competence. This distributed design offers superior flexibility, scalability, and generalizability. To enable this, we propose a two-stage Self-Awareness Training pipeline that enhances each LLM's self-awareness. Extensive experiments demonstrate that DiSRouter significantly outperforms existing routing methods in utility across various scenarios, effectively distinguishes between easy and hard queries, and shows strong generalization to out-of-domain tasks. Our work validates that leveraging an LLM's intrinsic self-awareness is more effective than external assessment, paving the way for more modular and efficient multi-agent systems.
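To make the routing idea concrete, here is a minimal sketch of how a query might traverse a chain of self-aware agents. All names (`Agent`, `confidence`, `self_route`, the threshold, and the cheapest-first ordering) are illustrative assumptions, not the paper's actual API or topology; the paper's agents form a general network and use trained self-awareness rather than the toy confidence functions shown here.

```python
# Illustrative sketch only: a hypothetical distributed self-routing loop.
# Each agent uses its own (self-assessed) confidence to decide whether to
# answer the query or forward it to a more capable, costlier agent.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    cost: float                          # relative inference cost
    confidence: Callable[[str], float]   # stand-in for learned self-awareness

def self_route(query: str, agents: list[Agent], threshold: float = 0.5) -> str:
    """Pass the query along the chain, cheapest agent first.
    An agent answers if its self-assessed confidence clears the threshold;
    otherwise it forwards the query to the next (costlier) agent."""
    for agent in sorted(agents, key=lambda a: a.cost):
        if agent.confidence(query) >= threshold:
            return agent.name
    # No agent is confident: fall back to the most capable one.
    return max(agents, key=lambda a: a.cost).name

# Toy confidence functions: the small model is only confident on short queries.
small = Agent("small-llm", cost=1.0,
              confidence=lambda q: 0.9 if len(q) < 20 else 0.2)
large = Agent("large-llm", cost=10.0, confidence=lambda q: 0.8)

print(self_route("2+2?", [small, large]))                       # small-llm
print(self_route("Prove this nontrivial theorem.", [small, large]))  # large-llm
```

The point of the sketch is the locality of the decision: no central router scores all models; each agent only consults its own calibration, so adding or removing a model changes nothing but the chain itself.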