Confidence-Guided Stepwise Model Routing for Cost-Efficient Reasoning

πŸ“… 2025-11-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large language models (LLMs) incur high computational costs and often rely on external routing modules or expensive human-annotated data for complex reasoning tasks such as mathematical problem solving, multi-hop question answering, and planning. To address these limitations, this paper introduces STEER, a framework that enables dynamic, stepwise collaboration between small and large models without requiring external routers or synthetic supervision. STEER uses the confidence scores derived from the small model's internal logits *before each reasoning step* as a domain-agnostic, fine-grained routing signal, and, guided by cost-sensitive scheduling, invokes the large model only when needed to preserve accuracy. Experiments on benchmarks including AIME show that STEER cuts FLOPs by 48% relative to full large-model inference while improving accuracy by up to 20%, significantly outperforming baselines that rely on external routing components.
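The routing idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: `step_confidence` uses the mean max-softmax probability over a step's token logits as the confidence signal, and `steer_route`, its threshold, and the model interfaces are hypothetical names and simplifications.

```python
import math

def step_confidence(step_logits):
    """Mean max-softmax probability across the token logits of one step.
    One plausible confidence signal; the paper's exact scoring may differ."""
    probs = []
    for token_logits in step_logits:
        m = max(token_logits)
        exps = [math.exp(x - m) for x in token_logits]
        probs.append(max(exps) / sum(exps))
    return sum(probs) / len(probs)

def steer_route(problem, small_model, large_model, threshold=0.7, max_steps=16):
    """Confidence-guided stepwise routing (sketch): draft each reasoning step
    with the small model; if its confidence falls below `threshold`,
    regenerate only that step with the large model."""
    trace = []
    for _ in range(max_steps):
        step, logits = small_model(problem, trace)
        if step is None:  # small model signals the solution is complete
            break
        if step_confidence(logits) < threshold:
            step = large_model(problem, trace)  # escalate this step only
        trace.append(step)
    return trace
```

Because escalation happens per step rather than per query, a mostly easy problem with one hard step pays the large model's cost only once.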

πŸ“ Abstract
Recent advances in Large Language Models (LLMs) - particularly model scaling and test-time techniques - have greatly enhanced the reasoning capabilities of language models at the expense of higher inference costs. To lower inference costs, prior works train router models or deferral mechanisms that allocate easy queries to a small, efficient model, while forwarding harder queries to larger, more expensive models. However, these trained router models often lack robustness under domain shifts and require expensive data synthesis techniques such as Monte Carlo rollouts to obtain sufficient ground-truth routing labels for training. In this work, we propose Confidence-Guided Stepwise Model Routing for Cost-Efficient Reasoning (STEER), a domain-agnostic framework that performs fine-grained, step-level routing between smaller and larger LLMs without utilizing external models. STEER leverages confidence scores from the smaller model's logits prior to generating a reasoning step, so that the large model is invoked only when necessary. Extensive evaluations using different LLMs on a diverse set of challenging benchmarks across multiple domains such as Mathematical Reasoning, Multi-Hop QA, and Planning tasks indicate that STEER achieves competitive or enhanced accuracy while reducing inference costs (up to +20% accuracy with 48% less FLOPs compared to solely using the larger model on AIME), outperforming baselines that rely on trained external modules. Our results establish model-internal confidence as a robust, domain-agnostic signal for model routing, offering a scalable pathway for efficient LLM deployment.
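The FLOPs saving quoted in the abstract depends on how often the large model is invoked. A back-of-envelope cost model makes the trade-off concrete; the per-step costs below are illustrative assumptions, not figures from the paper:

```python
def relative_flops(n_steps, n_escalated, c_small=1.0, c_large=10.0):
    """Cost of confidence-guided routing relative to running the large model
    on every step. Every step is drafted by the small model (cost c_small);
    escalated steps additionally pay the large model's cost c_large.
    c_small and c_large are hypothetical per-step FLOP costs."""
    mixed = n_steps * c_small + n_escalated * c_large
    all_large = n_steps * c_large
    return mixed / all_large
```

Under this toy model, escalating 4 of 10 steps with a 10x cost gap yields half the FLOPs of always using the large model, in the ballpark of the 48% reduction the paper reports for AIME.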
Problem

Research questions and friction points this paper is trying to address.

- Reducing high inference costs in large language models during reasoning tasks
- Eliminating dependency on external router models vulnerable to domain shifts
- Developing step-level routing using internal confidence scores without training
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Step-level routing between small and large LLMs
- Uses internal confidence scores for routing decisions
- Eliminates need for external router model training