🤖 AI Summary
Large language models (LLMs) exhibit poor confidence calibration in chain-of-thought (CoT) reasoning, undermining reliability and interpretability.
Method: We propose a supervised fine-tuning paradigm that uses only scalar confidence labels, without explicit reasoning traces or reinforcement-learning rewards, to elicit spontaneous self-verification behavior. Crucially, we introduce an uncertainty-calibrated rethinking mechanism at inference time.
Contribution/Results: We are the first to demonstrate that verbalized confidence annotations alone suffice to induce self-checking reasoning. On GSM8K, MATH-500, and ARC-Challenge, our approach simultaneously improves accuracy, confidence calibration, and reasoning interpretability: low-confidence questions trigger longer, more reflective self-verification responses, while high-confidence ones yield concise answers. This shows that lightweight supervision can jointly improve reasoning robustness and controllability.
📝 Abstract
Uncertainty calibration is essential for the safe deployment of large language models (LLMs), particularly when users rely on verbalized confidence estimates. While prior work has focused on classifiers or short-form generation, confidence calibration for chain-of-thought (CoT) reasoning remains largely unexplored. Surprisingly, we find that supervised fine-tuning with scalar confidence labels alone suffices to elicit self-verification behavior in language models, without any explicit reasoning supervision or reinforcement-learning rewards. Despite being trained only to produce a verbalized confidence score, with no self-verifying examples, the model learns to generate longer, self-checking responses for low-confidence queries and more concise answers for high-confidence ones. We further propose a simple rethinking method that boosts performance via test-time scaling based on calibrated uncertainty. Experiments on GSM8K and held-out reasoning tasks such as MATH-500 and ARC-Challenge show that our confidence-aware fine-tuning improves both calibration and accuracy, while also enhancing interpretability by aligning the model's reasoning path with its confidence.
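The rethinking mechanism described above can be sketched as a small inference-time loop: parse the model's verbalized confidence, and if it falls below a threshold, prompt the model to re-examine its answer. This is a minimal illustration, not the paper's implementation; the `model.generate` interface, the `Confidence:` output format, the 0.7 threshold, and the rethink prompt are all hypothetical placeholders.

```python
import re

CONF_THRESHOLD = 0.7  # hypothetical cutoff; the paper's actual threshold is not specified here


def parse_confidence(response: str) -> float:
    """Extract a verbalized confidence score such as 'Confidence: 0.85' from model output."""
    m = re.search(r"Confidence:\s*([01](?:\.\d+)?)", response)
    return float(m.group(1)) if m else 0.0


def answer_with_rethinking(model, question: str, max_rethinks: int = 2) -> str:
    """Query the model; while its verbalized confidence is low, ask it to re-check its answer."""
    response = model.generate(question)
    for _ in range(max_rethinks):
        if parse_confidence(response) >= CONF_THRESHOLD:
            break  # calibrated confidence is high enough; accept the answer as-is
        # Low confidence: spend extra test-time compute on a self-verification pass.
        response = model.generate(
            f"{question}\n\nYour previous answer:\n{response}\n\n"
            "Your confidence was low. Re-check each step and answer again."
        )
    return response
```

Because the fine-tuned model's confidence is calibrated, the loop concentrates extra compute on exactly the queries most likely to be wrong, which is how the test-time scaling gain arises.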