🤖 AI Summary
Large language models (LLMs) exhibit miscalibrated confidence estimates, hindering reliable uncertainty quantification. Method: This work systematically investigates whether reasoning models (LLMs that engage in extended chain-of-thought (CoT) reasoning) achieve superior confidence calibration. We analyze calibration dynamics as the CoT unfolds and identify behavioral correlates of improved calibration. Contribution/Results: (1) CoT reasoning progressively improves calibration, with reasoning models becoming better calibrated as their CoT unfolds; (2) we identify "slow thinking" behaviors, including backtracking and exploring alternative approaches, as the key mechanistic driver of this improvement; (3) the mechanism generalizes: eliciting slow thinking via in-context learning significantly improves calibration in non-reasoning LLMs. Across 36 experimental settings (six reasoning models on six datasets), reasoning models achieve strictly better calibration than their non-reasoning counterparts in 33; removing slow-thinking behaviors from the CoT consistently degrades calibration. Our findings establish an interpretable, behavior-grounded framework for confidence modeling in trustworthy AI.
📝 Abstract
Despite their strengths, large language models (LLMs) often fail to communicate their confidence accurately, making it difficult to assess when they might be wrong and limiting their reliability. In this work, we demonstrate that reasoning models (LLMs that engage in extended chain-of-thought (CoT) reasoning) exhibit superior performance not only in problem-solving but also in accurately expressing their confidence. Specifically, we benchmark six reasoning models across six datasets and find that they achieve strictly better confidence calibration than their non-reasoning counterparts in 33 out of the 36 settings. Our detailed analysis reveals that these gains in calibration stem from the slow thinking behaviors of reasoning models, such as exploring alternative approaches and backtracking, which enable them to adjust their confidence dynamically throughout their CoT, making it progressively more accurate. In particular, we find that reasoning models become increasingly better calibrated as their CoT unfolds, a trend not observed in non-reasoning models. Moreover, removing slow thinking behaviors from the CoT leads to a significant drop in calibration. Lastly, we show that these gains are not exclusive to reasoning models: non-reasoning models also benefit when guided to perform slow thinking via in-context learning.
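The abstract does not specify which calibration metric is used, but confidence calibration in this setting is commonly quantified with Expected Calibration Error (ECE): bin answers by stated confidence and compare each bin's average confidence against its empirical accuracy. The sketch below is an illustration of that standard metric, not the paper's implementation; the function name and toy data are my own.

```python
# Minimal sketch of Expected Calibration Error (ECE), a common way to
# quantify the confidence-calibration gap discussed above.
# All inputs below are illustrative toy data, not results from the paper.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Half-open bins (lo, hi]; the first bin also includes exactly 0.0.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)   # empirical accuracy
        conf = sum(confidences[i] for i in idx) / len(idx)  # mean confidence
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# Toy example: a model that is overconfident when it is wrong.
confs = [0.95, 0.9, 0.85, 0.6, 0.55]
right = [1, 1, 0, 1, 0]
print(expected_calibration_error(confs, right))
```

Lower ECE means confidence tracks accuracy more closely, which is the sense in which the abstract says reasoning models are "strictly better" calibrated in 33 of 36 settings.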