Confidence-Calibrated Small-Large Language Model Collaboration for Cost-Efficient Reasoning

📅 2026-03-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes COREA, a cascaded reasoning framework that combines small language models (SLMs) with large language models (LLMs) to balance efficiency and accuracy on complex reasoning tasks. The SLM first generates an answer along with a natural-language confidence statement; queries with low confidence are then delegated to the LLM. Crucially, the system jointly optimizes the SLM's reasoning capability and confidence calibration through reinforcement learning, making it the first approach to integrate linguistic confidence expressions with reinforcement learning for dynamic model collaboration. Evaluated on out-of-domain mathematical and non-mathematical datasets, COREA reduces inference costs by 21.5% and 16.8%, respectively, while sacrificing no more than 2% absolute pass@1 accuracy.
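
As a concrete illustration of this cascade, here is a minimal Python sketch. The `slm_generate` and `llm_generate` callables, the parsed confidence score, and the 0.8 deferral threshold are all hypothetical stand-ins; the paper's actual interfaces and threshold value are not given on this page.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SLMOutput:
    answer: str
    confidence: float  # natural-language confidence statement, parsed into [0, 1]

def cascade_answer(
    question: str,
    slm_generate: Callable[[str], SLMOutput],  # hypothetical SLM interface
    llm_generate: Callable[[str], str],        # hypothetical LLM interface
    threshold: float = 0.8,                    # assumed deferral threshold
) -> str:
    """Answer with the cheap SLM first; defer to the expensive LLM only when
    the SLM's verbalized confidence falls below the threshold."""
    slm_out = slm_generate(question)
    if slm_out.confidence >= threshold:
        return slm_out.answer      # confident enough: keep the cheap SLM answer
    return llm_generate(question)  # low confidence: escalate to the LLM
```

The reported cost savings come from how often the SLM's confidence clears the threshold, so the threshold directly trades accuracy against the LLM invocation rate.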

📝 Abstract
Large language models (LLMs) demonstrate superior reasoning capabilities compared to small language models (SLMs), but incur substantially higher costs. We propose COllaborative REAsoner (COREA), a system that cascades an SLM with an LLM to achieve a balance between accuracy and cost in complex reasoning tasks. COREA first attempts to answer questions using the SLM, which outputs both an answer and a verbalized confidence score. Questions with confidence below a predefined threshold are deferred to the LLM for more accurate resolution. We introduce a reinforcement learning-based training algorithm that aligns the SLM's confidence through an additional confidence calibration reward. Extensive experiments demonstrate that our method jointly improves the SLM's reasoning ability and confidence calibration across diverse datasets and model backbones. Compared to using the LLM alone, COREA reduces cost by 21.5% and 16.8% on out-of-domain math and non-math datasets, respectively, with only an absolute pass@1 drop within 2%.
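
The confidence calibration reward itself is not spelled out in the abstract. Below is one plausible Brier-style shaping, sketched under the assumption that the RL reward adds a calibration term to task correctness; the weight `alpha` and the quadratic penalty are illustrative assumptions, not the paper's exact formulation.

```python
def rl_reward(is_correct: bool, confidence: float, alpha: float = 0.5) -> float:
    """Illustrative shaping of a correctness-plus-calibration reward.
    The Brier-style penalty and the weight `alpha` are assumptions,
    not the paper's exact reward."""
    correctness = 1.0 if is_correct else 0.0
    # Penalty is 0 when confidence matches correctness exactly and grows
    # quadratically as the verbalized confidence drifts from it.
    calibration_penalty = (confidence - correctness) ** 2
    return correctness - alpha * calibration_penalty
```

Under this shaping a confidently wrong answer is penalized hardest (`rl_reward(False, 0.9)` is -0.405, versus -0.005 for `rl_reward(False, 0.1)`), which is exactly the incentive the deferral threshold relies on.
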
Problem

Research questions and friction points this paper is trying to address.

cost-efficient reasoning
large language models
small language models
confidence calibration
reasoning accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

confidence calibration
small-large language model collaboration
cost-efficient reasoning
reinforcement learning
cascaded inference
Authors

Chuang Zhang, Tsinghua University (Autonomous Driving; Intelligent Connected Vehicle)
Zizhen Zhu, Tsinghua University
Yihao Wei, Amazon Web Services
Bing Tian, Amazon Web Services
Junyi Liu, Amazon Web Services
Henan Wang, Department of Computer Science, Tsinghua University, Beijing, China (Database)
Xavier Wang, Amazon Web Services
Yaxiao Liu, Amazon Web Services (Cloud Computing)