🤖 AI Summary
This work addresses three core questions in uncertainty quantification (UQ) for reasoning language models (RLMs): (1) whether current RLMs are well-calibrated, (2) how deeper reasoning affects confidence calibration, and (3) whether chain-of-thought introspection improves models' awareness of their own uncertainty. We propose *Introspective UQ*, the first systematic framework to assess RLMs' ability to improve calibration via reflective analysis of their own reasoning traces. Targeting models whose multi-step reasoning is induced by reinforcement learning, our method combines introspective prompting with explicit confidence self-assessment. Empirical evaluation across multiple state-of-the-art RLMs reveals: (i) pervasive overconfidence, (ii) worsening calibration with deeper reasoning, and (iii) significant calibration gains through introspection in some models, though efficacy varies markedly by architecture. These findings provide foundational insights for trustworthy RLM deployment and inform the development of rigorous UQ benchmarks.
📝 Abstract
Reasoning language models have set state-of-the-art (SOTA) records on many challenging benchmarks, enabled by multi-step reasoning induced via reinforcement learning. However, like previous language models, reasoning models are prone to generating confident, plausible responses that are incorrect (hallucinations). Knowing when and how much to trust these models is critical to their safe deployment in real-world applications. To this end, we explore uncertainty quantification of reasoning models in this work. Specifically, we ask three fundamental questions. First, are reasoning models well-calibrated? Second, does deeper reasoning improve model calibration? Finally, inspired by humans' innate ability to double-check their thought processes to verify the validity of their answers and their confidence, we ask: can reasoning models improve their calibration by explicitly reasoning about their chain-of-thought traces? We introduce introspective uncertainty quantification (UQ) to explore this direction. In extensive evaluations of SOTA reasoning models across a broad range of benchmarks, we find that reasoning models: (i) are typically overconfident, with self-verbalized confidence estimates often exceeding 85%, particularly for incorrect responses, (ii) become even more overconfident with deeper reasoning, and (iii) can become better calibrated through introspection (e.g., o3-Mini and DeepSeek R1), but not uniformly (e.g., Claude 3.7 Sonnet becomes less well-calibrated). We conclude with research directions for designing the necessary UQ benchmarks and improving the calibration of reasoning models.
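To make claims like "overconfident" and "better calibrated" concrete, a standard way to score self-verbalized confidences is expected calibration error (ECE): bin responses by stated confidence, then measure the gap between each bin's average confidence and its actual accuracy. The sketch below is illustrative and not from the paper; the function name and toy data are hypothetical.

```python
# Hypothetical sketch: expected calibration error (ECE) over a model's
# self-verbalized confidences. Not the paper's implementation.
def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: stated confidences in [0, 1]; correct: 0/1 outcomes."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Right-inclusive bins so a confidence of exactly 1.0 falls in the top bin;
        # the bottom bin also catches an exact 0.0.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        # Weight each bin's confidence-accuracy gap by its share of samples.
        ece += len(idx) / n * abs(avg_conf - accuracy)
    return ece

# Toy overconfident model: high stated confidence, only half the answers correct.
confs = [0.9, 0.95, 0.9, 0.85]
hits = [1, 0, 0, 1]
print(round(expected_calibration_error(confs, hits), 3))  # prints 0.4
```

A well-calibrated model drives this gap toward zero: among answers stated with 85% confidence, roughly 85% should be correct.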