Believe Your Model: Distribution-Guided Confidence Calibration

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large reasoning models fail to effectively model the relationship between confidence distributions and accuracy in multi-candidate answer selection, which makes their final answers unreliable. This work proposes DistriVoting, a novel approach that explicitly decomposes confidence distributions into positive and negative components, models them via a Gaussian mixture model, and introduces a rejection filter to reduce distributional overlap. Additionally, it designs a SelfStepConf mechanism that dynamically adjusts the reasoning process based on step-level confidence to further separate the two distributions. By integrating a distribution-guided voting scheme with a dynamic reasoning strategy, the method significantly outperforms state-of-the-art approaches across 16 models and 5 benchmarks, substantially improving both answer-selection accuracy and confidence calibration.
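The summary describes SelfStepConf only at a high level. As a rough illustration of a step-level confidence loop, the sketch below re-samples low-confidence reasoning steps before accepting them; `generate_step`, the retry prompt, the threshold, and the geometric-mean aggregation are all assumptions for illustration, not the paper's actual interface or algorithm.

```python
import statistics


def solve_with_step_conf(generate_step, max_steps=8, low=0.4):
    """Illustrative step-level confidence loop (assumed design, not the paper's).

    `generate_step(history)` is a hypothetical model call that takes the
    reasoning steps so far and returns (step_text, step_confidence).
    A step whose confidence falls below `low` is re-sampled once with a
    nudge appended to the history, so the final trajectory-level
    confidence is less contaminated by weak intermediate steps.
    """
    history, confs = [], []
    for _ in range(max_steps):
        step, conf = generate_step(history)
        if conf < low:
            # Re-sample the low-confidence step once (hypothetical retry prompt).
            step, conf = generate_step(history + ["Reconsider the last step."])
        history.append(step)
        confs.append(conf)
    # Aggregate step confidences into one trajectory-level score.
    overall = statistics.geometric_mean(confs)
    return history, overall
```

In a full pipeline, the trajectory-level score returned here would be the confidence fed into the voting stage over multiple candidates.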

📝 Abstract
Large Reasoning Models have demonstrated remarkable performance with the advancement of test-time scaling techniques, which enhance prediction accuracy by generating multiple candidate responses and selecting the most reliable answer. While prior work has shown that internal model signals such as confidence scores can partly indicate response correctness and exhibit a distributional correlation with accuracy, this distributional information has not been fully exploited to guide answer selection. Motivated by this, we propose DistriVoting, which incorporates distributional priors as a second signal alongside confidence during voting. Specifically, our method (1) first decomposes the mixed confidence distribution into positive and negative components using Gaussian Mixture Models, and (2) then applies a rejection filter, based on positive/negative samples drawn from these components, to mitigate the overlap between the two distributions. To further reduce this overlap at the level of the distributions themselves, we propose SelfStepConf, which uses step-level confidence to dynamically adjust the inference process, increasing the separation between the two distributions and improving the reliability of confidence scores during voting. Experiments across 16 models and 5 benchmarks demonstrate that our method significantly outperforms state-of-the-art approaches.
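Steps (1) and (2) of the abstract can be sketched concretely. The code below is an illustrative reconstruction under stated assumptions, not the paper's implementation: it fits a two-component 1-D Gaussian mixture to candidate confidence scores with plain EM, treats the higher-mean component as "positive", rejects candidates whose posterior of being positive is ambiguous, and runs a confidence-weighted vote over the survivors. The EM initialization, the 0.7 posterior threshold, and the weighting scheme are all assumptions.

```python
import numpy as np


def fit_gmm_1d(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture to scores `x` with plain EM."""
    x = np.asarray(x, dtype=float)
    # Initialize the two means at the lower/upper quartiles of the data.
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sigma = np.array([x.std() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each score.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    # Recompute responsibilities under the final parameters.
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    return mu, resp


def distribution_guided_vote(answers, confidences, threshold=0.7):
    """Illustrative DistriVoting-style selection (assumed details throughout)."""
    mu, resp = fit_gmm_1d(confidences)
    pos = int(np.argmax(mu))          # higher-mean component taken as "positive"
    p_pos = resp[:, pos]
    # Rejection filter: drop candidates whose positive posterior is ambiguous.
    keep = p_pos > threshold
    if not keep.any():
        keep = p_pos == p_pos.max()   # fall back to the single best candidate
    # Confidence-weighted vote among the surviving candidates.
    votes = {}
    for ans, conf, k in zip(answers, confidences, keep):
        if k:
            votes[ans] = votes.get(ans, 0.0) + conf
    return max(votes, key=votes.get)
```

With well-separated positive and negative confidence clusters, the filter discards the low-confidence cluster entirely, so the vote is decided only by candidates the mixture assigns confidently to the positive component.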
Problem

Research questions and friction points this paper is trying to address.

confidence calibration
distributional correlation
answer selection
reasoning models
test-time scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

confidence calibration
distribution-guided voting
Gaussian Mixture Models
step-level confidence
test-time scaling