🤖 AI Summary
Large language models often exhibit overconfidence when generating incorrect answers, rendering their verbalized confidence unreliable as an indicator of uncertainty. This work provides the first mechanistic explanation of this phenomenon at the neural circuit level, identifying specific MLP modules and attention heads as key drivers. Leveraging differentiable modeling and causal circuit localization, we propose a targeted intervention strategy applied during inference that significantly improves calibration performance. We demonstrate the effectiveness and generalizability of our approach across two instruction-tuned models and three diverse datasets, establishing a foundation for more reliable uncertainty quantification in large language models.
📝 Abstract
Large language models are often not just wrong, but \emph{confidently wrong}: when they produce factually incorrect answers, they tend to verbalize overly high confidence rather than signal uncertainty. Such verbalized overconfidence can mislead users and undermines confidence scores as a reliable uncertainty signal, yet its internal mechanisms remain poorly understood. We present a circuit-level mechanistic analysis of inflated verbalized confidence in LLMs, organized around three axes: capturing verbalized confidence as a differentiable internal signal, identifying the circuits that causally inflate it, and leveraging these insights for targeted inference-time recalibration. Across two instruction-tuned LLMs on three datasets, we find that a compact set of MLP blocks and attention heads, concentrated in middle-to-late layers, consistently writes the confidence-inflation signal at the final token position. We further show that targeted inference-time interventions on these circuits substantially improve calibration. Together, our results suggest that verbalized overconfidence in LLMs is driven by identifiable internal circuits and can be mitigated through targeted intervention.
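The intervention idea described above, down-weighting the residual-stream contribution of a small set of identified components at the final token position, can be illustrated with a toy sketch. This is not the paper's code: the module writes, the "confidence direction" probe, and the choice of which modules inflate confidence are all invented here for illustration.

```python
# Toy sketch of a circuit-level inference-time intervention (illustrative
# only; module writes, probe direction, and the inflating set are made up).
import numpy as np

rng = np.random.default_rng(0)

N_MODULES, D = 8, 16
# Each toy "module" (stand-in for an MLP block or attention head) writes a
# vector into a shared residual stream at the final token position.
module_writes = rng.normal(size=(N_MODULES, D))
# Hypothetical assumption: modules 5 and 6 carry the inflation signal.
inflating = {5, 6}

# Direction in residual space that a linear probe reads out as "confidence".
confidence_dir = rng.normal(size=D)

def verbalized_confidence(scale_inflating: float = 1.0) -> float:
    """Sum the module writes, optionally down-scaling the identified
    circuits, then project onto the confidence direction and squash
    to (0, 1) with a sigmoid."""
    resid = np.zeros(D)
    for i, write in enumerate(module_writes):
        alpha = scale_inflating if i in inflating else 1.0
        resid += alpha * write
    logit = float(resid @ confidence_dir)
    return 1.0 / (1.0 + np.exp(-logit))

baseline = verbalized_confidence(1.0)    # no intervention
calibrated = verbalized_confidence(0.2)  # damp the identified circuits
```

In a real model the same pattern would be implemented as a forward hook that rescales the output of the localized heads and MLP blocks before it is added back into the residual stream; the scaling factor then becomes a knob traded off against task accuracy.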