Rethinking LLM Parametric Knowledge as Post-retrieval Confidence for Dynamic Retrieval and Reranking

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently hallucinate when operating beyond their intrinsic knowledge boundaries; while retrieval-augmented generation (RAG) mitigates this by incorporating external knowledge, it lacks reliable mechanisms to assess whether retrieved contexts genuinely support answering a given query. Method: We propose a knowledge-boundary-aware post-retrieval filtering method that leverages LLMs' internal hidden states, specifically modeling how retrieved contexts dynamically modulate model confidence. We design a Confidence-Based Dynamic Retrieval (CBDR) mechanism and construct the NQ_Rerank dataset from LLM preference signals to fine-tune a reranker for fine-grained context selection. Contribution/Results: Our approach requires no human annotation, instead exploiting continuous hidden-state signals to infer knowledge credibility. It reduces retrieval overhead while significantly improving the end-to-end accuracy and robustness of RAG systems.

📝 Abstract
Large Language Models (LLMs) often generate inaccurate responses (hallucinations) when faced with questions beyond their knowledge scope. Retrieval-Augmented Generation (RAG) addresses this by leveraging external knowledge, but a critical challenge remains: determining whether retrieved contexts effectively enhance the model's ability to answer specific queries. This challenge underscores the importance of knowledge boundary awareness, which current methods, relying on discrete labels or limited signals, fail to address adequately, as they overlook the rich information in LLMs' continuous internal hidden states. To tackle this, we propose a novel post-retrieval knowledge filtering approach. First, we construct a confidence detection model based on LLMs' internal hidden states to quantify how retrieved contexts enhance the model's confidence. Using this model, we build a preference dataset (NQ_Rerank) to fine-tune a reranker, enabling it to prioritize contexts preferred by the downstream LLM during reranking. Additionally, we introduce Confidence-Based Dynamic Retrieval (CBDR), which adaptively triggers retrieval based on the LLM's initial confidence in the original question, reducing knowledge conflicts and improving efficiency. Experimental results demonstrate significant improvements in accuracy for context screening and end-to-end RAG performance, along with a notable reduction in retrieval costs while maintaining competitive accuracy.
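The confidence detection model described in the abstract maps an LLM's continuous hidden states to a confidence score. The paper does not specify the probe architecture here, so the following is only a minimal sketch: a logistic-regression probe over hidden-state vectors, with synthetic vectors standing in for real LLM activations (the `ConfidenceProbe` class, its dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConfidenceProbe:
    """Illustrative linear probe: maps an LLM hidden-state vector
    to a confidence score in [0, 1]. Hypothetical, not the paper's model."""

    def __init__(self, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.01, size=hidden_dim)
        self.b = 0.0

    def score(self, h):
        # Confidence that the model can answer given this hidden state.
        return float(sigmoid(h @ self.w + self.b))

    def fit(self, H, y, lr=0.1, epochs=200):
        # Plain gradient descent on binary cross-entropy:
        # y = 1 if the LLM answered correctly, 0 otherwise.
        for _ in range(epochs):
            p = sigmoid(H @ self.w + self.b)
            grad = p - y
            self.w -= lr * (H.T @ grad) / len(y)
            self.b -= lr * grad.mean()
```

In the paper's setting, such a probe would be trained on hidden states collected with and without retrieved contexts, so that the *change* in its score quantifies how much a context raises the model's confidence.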
Problem

Research questions and friction points this paper is trying to address.

Detecting when retrieved contexts enhance LLM confidence
Improving knowledge boundary awareness using hidden states
Reducing retrieval costs while maintaining answer accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses hidden states for confidence detection
Fine-tunes reranker with LLM preference dataset
Implements confidence-based dynamic retrieval triggering
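The third innovation, confidence-based retrieval triggering, can be sketched as a simple control flow: retrieve only when the LLM's initial confidence in the bare question falls below a threshold. Everything below is an assumed interface (`llm_confidence`, `generate`, `retrieve`, `rerank`, and the threshold `tau` are hypothetical stand-ins, not the paper's API).

```python
def answer_with_cbdr(question, llm_confidence, generate, retrieve, rerank, tau=0.75):
    """Sketch of Confidence-Based Dynamic Retrieval (CBDR):
    skip retrieval entirely when the model is already confident,
    otherwise retrieve, rerank, and answer with the top contexts."""
    c0 = llm_confidence(question)        # initial confidence on the bare question
    if c0 >= tau:
        # Parametric knowledge suffices: avoids retrieval cost and
        # potential knowledge conflicts from irrelevant contexts.
        return generate(question, contexts=[])
    candidates = retrieve(question)
    contexts = rerank(question, candidates)  # reranker fine-tuned on NQ_Rerank
    return generate(question, contexts=contexts[:3])
```

This matches the abstract's claim of reduced retrieval costs: high-confidence questions never touch the retriever, while low-confidence ones still benefit from reranked external contexts.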
Haoxiang Jin
School of Computer Science and Technology, Xidian University
Ronghan Li
Xidian University
Natural language processing · Machine Reading Comprehension · Dialogue System
Qiguang Miao
School of Computer Science and Technology, Xidian University
Zixiang Lu
School of Computer Science and Technology, Xidian University