🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) in retrieval-augmented generation (RAG) to retrieval noise, which often leads to overconfident predictions when the retrieved context is contradictory or irrelevant. The study systematically uncovers, for the first time, the mechanism by which such noise affects model confidence and introduces NAACL, a novel, endogenous calibration framework that operates without requiring a stronger teacher model. NAACL constructs a roughly 2K-example synthetic dataset using noise-aware rules and leverages supervised fine-tuning to enable the model to intrinsically recognize and calibrate for noise-induced uncertainty. Experimental results demonstrate that this approach improves Expected Calibration Error (ECE) by 10.9% on in-domain data and by 8.0% on out-of-domain data, substantially enhancing the reliability of model confidence under noisy retrieval conditions.
📄 Abstract
Accurately assessing model confidence is essential for deploying large language models (LLMs) in mission-critical factual domains. While retrieval-augmented generation (RAG) is widely adopted to improve grounding, confidence calibration in RAG settings remains poorly understood. We conduct a systematic study across four benchmarks, revealing that LLMs exhibit poor calibration performance due to noisy retrieved contexts. Specifically, contradictory or irrelevant evidence tends to inflate the model's false certainty, leading to severe overconfidence. To address this, we propose NAACL Rules (Noise-AwAre Confidence CaLibration Rules) to provide a principled foundation for resolving overconfidence under noise. We further design NAACL, a noise-aware calibration framework that synthesizes supervision from about 2K HotpotQA examples guided by these rules. By performing supervised fine-tuning (SFT) with this data, NAACL equips models with intrinsic noise awareness without relying on stronger teacher models. Empirical results show that NAACL yields substantial gains, improving ECE scores by 10.9% in-domain and 8.0% out-of-domain. By bridging the gap between retrieval noise and verbal calibration, NAACL paves the way for both accurate and epistemically reliable LLMs.
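Since Expected Calibration Error (ECE) is the headline metric here, a minimal sketch of how it is typically computed may help: predictions are binned by stated confidence, and the gaps between each bin's mean confidence and its empirical accuracy are averaged, weighted by bin size. The bin count, binning scheme, and toy data below are illustrative assumptions, not details taken from the paper.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard equal-width-bin ECE: weighted mean |confidence - accuracy| gap."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Right-inclusive bins so a confidence of exactly 1.0 lands in the top bin;
        # a confidence of exactly 0.0 is assigned to the bottom bin.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece

# An overconfident model of the kind the paper describes:
# high stated confidence, mixed actual correctness.
confs = [0.95, 0.9, 0.92, 0.88, 0.6]
right = [1, 0, 0, 1, 1]
print(expected_calibration_error(confs, right))
```

A perfectly calibrated model (confidence matching accuracy in every bin) would score 0; overconfidence under noisy retrieval inflates the gap in the high-confidence bins, which is exactly what a lower ECE after NAACL-style fine-tuning reflects.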