Do Retrieval Augmented Language Models Know When They Don't Know?

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work presents the first systematic evaluation of retrieval-augmented language models’ (RALMs) “rejection capability”—their ability to identify and abstain from answering unknown questions—addressing a critical gap in hallucination research, which has largely overlooked model calibration and rejection behavior. We find that RALMs suffer from pervasive over-rejection, and discover that context fine-tuning mitigates this issue, whereas rejection-aware instruction tuning (R-tuning) exacerbates it. To address this, we propose a lightweight rejection mechanism that jointly leverages internal confidence scores and external retrieval signals, requiring no additional parameters. Our method significantly improves rejection accuracy (+12.3%) while also enhancing final answer quality (EM +4.1%). This work provides both theoretical insights and practical solutions for building more reliable, well-calibrated RALMs.
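The summary describes the proposed refusal mechanism only at a high level: combine an internal confidence score with an external retrieval signal, and refuse only when both are weak. A minimal illustrative sketch of that idea follows; the function name, thresholds, and score conventions here are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
import math

def should_refuse(answer_logprobs, retrieval_scores,
                  conf_threshold=0.6, retrieval_threshold=0.3):
    """Abstain only when BOTH signals are weak (hypothetical sketch).

    answer_logprobs:  per-token log-probabilities of the model's draft answer
                      (internal signal).
    retrieval_scores: relevance scores in [0, 1] of retrieved passages
                      (external signal).
    """
    # Internal confidence: geometric-mean token probability of the answer.
    avg_logprob = sum(answer_logprobs) / len(answer_logprobs)
    internal_conf = math.exp(avg_logprob)

    # External support: strongest relevance score among retrieved passages.
    external_support = max(retrieval_scores) if retrieval_scores else 0.0

    # Refuse only when neither the model nor the retriever is confident,
    # which guards against the over-refusal behavior the paper reports.
    return internal_conf < conf_threshold and external_support < retrieval_threshold
```

Requiring both signals to be weak before abstaining is one plausible way to improve refusal accuracy without suppressing answers the model could in fact get right.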

📝 Abstract
Existing Large Language Models (LLMs) occasionally generate plausible yet factually incorrect responses, known as hallucinations. Researchers primarily use two approaches to mitigate hallucinations, namely Retrieval Augmented Language Models (RALMs) and refusal post-training. However, current research predominantly emphasizes their individual effectiveness while overlooking the evaluation of the refusal capability of RALMs. In this study, we ask the fundamental question: Do RALMs know when they don't know? Specifically, we ask three questions. First, are RALMs well-calibrated regarding different internal and external knowledge states? We examine the influence of various factors. Contrary to expectations, we find that LLMs exhibit significant over-refusal behavior. Then, how does refusal post-training affect the over-refusal issue? We investigate the Refusal-aware Instruction Tuning and In-Context Fine-tuning methods. Our results show that the over-refusal problem is mitigated by In-Context Fine-tuning but magnified by R-tuning. However, we also find that the refusal ability may conflict with the quality of the answer. Finally, we develop a simple yet effective refusal method for refusal post-trained models to improve their overall answer quality in terms of refusal and correct answers. Our study provides a more comprehensive understanding of the influence of important factors on RALM systems.
Problem

Research questions and friction points this paper is trying to address.

Evaluating refusal calibration in Retrieval Augmented Language Models
Investigating over-refusal behavior in large language models
Developing methods to improve refusal capability and answer quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated the refusal calibration of RALMs
Tested refusal post-training methods (R-tuning and In-Context Fine-tuning)
Developed a simple yet effective refusal method for refusal post-trained models
Youchao Zhou
Beijing Institute of Technology
Heyan Huang
Beijing Institute of Technology
Yicheng Liu
Tsinghua University
Rui Dai
Beijing Institute of Technology
Xinglin Wang
Beijing Institute of Technology
Xingchen Zhang
Senior Lecturer and Director of the Fusion Intelligence Lab, University of Exeter
Shumin Shi
Beijing Institute of Technology
Yang Deng
Singapore Management University