🤖 AI Summary
Current large reasoning models fail to effectively model the relationship between confidence distributions and accuracy when selecting among multiple candidate answers, leading to unreliable answer selection. This work proposes DistriVoting, a novel approach that explicitly decomposes the confidence distribution into positive and negative components, models them with a Gaussian mixture model, and introduces a rejection filter to reduce the overlap between the two distributions. Additionally, it introduces a SelfStepConf mechanism that dynamically adjusts the reasoning process based on step-level confidence to widen the separation between the distributions. By integrating this distribution-guided voting scheme with the dynamic reasoning strategy, the method significantly outperforms state-of-the-art approaches across 16 models and 5 benchmarks, substantially improving both answer selection accuracy and confidence calibration.
📝 Abstract
Large Reasoning Models have demonstrated remarkable performance with the advancement of test-time scaling techniques, which enhance prediction accuracy by generating multiple candidate responses and selecting the most reliable answer. While prior work has shown that internal model signals such as confidence scores can partly indicate response correctness and exhibit a distributional correlation with accuracy, this distributional information has not been fully exploited to guide answer selection. Motivated by this, we propose DistriVoting, which incorporates distributional priors as an additional signal alongside confidence during voting. Specifically, our method (1) decomposes the mixed confidence distribution into positive and negative components using a Gaussian Mixture Model, and (2) applies a rejection filter, based on positive/negative samples drawn from these components, to mitigate the overlap between the two distributions. To further reduce this overlap at the level of the distributions themselves, we propose SelfStepConf, which uses step-level confidence to dynamically adjust the inference process, widening the separation between the two distributions and thereby improving the reliability of confidence scores during voting. Experiments across 16 models and 5 benchmarks demonstrate that our method significantly outperforms state-of-the-art approaches.
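The core idea of decomposing a pool of candidate confidence scores into positive and negative components and then rejecting ambiguous candidates can be sketched as follows. This is an illustrative reimplementation under our own assumptions, not the paper's code: the function names, the EM initialization by median split, and the `margin` threshold of the rejection rule are all ours.

```python
import math

def _pdf(x, m, v):
    """Density of a 1-D Gaussian with mean m and variance v."""
    return math.exp(-((x - m) ** 2) / (2 * v)) / math.sqrt(2 * math.pi * v)

def _var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fit_two_gmm(scores, iters=200):
    """Fit a two-component 1-D Gaussian mixture to confidence scores via EM.

    Returns (weights, means, stds), sorted so component 1 is the
    higher-mean ("positive", likely-correct) component.
    """
    n = len(scores)
    s = sorted(scores)
    lo, hi = s[: n // 2], s[n // 2:]          # crude init: median split
    mu = [sum(lo) / len(lo), sum(hi) / len(hi)]
    var = [max(1e-6, _var(lo)), max(1e-6, _var(hi))]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each score
        resp = []
        for x in scores:
            p = [w[k] * _pdf(x, mu[k], var[k]) for k in range(2)]
            z = sum(p) or 1e-12
            resp.append([pk / z for pk in p])
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            rk = max(sum(r[k] for r in resp), 1e-12)
            w[k] = rk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, scores)) / rk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, scores)) / rk)
    if mu[0] > mu[1]:                         # keep component 1 = positive
        w.reverse(); mu.reverse(); var.reverse()
    return w, mu, [v ** 0.5 for v in var]

def reject_filter(scores, w, mu, std, margin=0.5):
    """Keep a candidate only if the positive component is clearly more
    likely than the negative one (by the given margin)."""
    keep = []
    for i, x in enumerate(scores):
        p_pos = w[1] * _pdf(x, mu[1], std[1] ** 2)
        p_neg = w[0] * _pdf(x, mu[0], std[0] ** 2)
        if p_pos > (1 + margin) * p_neg:
            keep.append(i)
    return keep
```

In this sketch, the surviving indices from `reject_filter` would then feed an ordinary majority vote over the corresponding candidate answers; candidates whose confidence falls in the overlap region between the two fitted Gaussians are simply dropped.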