To Trust or Not to Trust? Enhancing Large Language Models' Situated Faithfulness to External Contexts

📅 2024-10-18
📈 Citations: 2
Influential: 0
🤖 AI Summary
In retrieval-augmented generation (RAG), inaccurate or misleading external context can conflict with the model's internal knowledge, degrading answer reliability. Method: The paper frames robust behavior as *situated faithfulness*: dynamically calibrating trust in external information against confidence in internal knowledge. It proposes two approaches: Self-Guided Confidence Reasoning (SCR), where the model self-assesses the confidence of external information relative to its internal knowledge, and Rule-Based Confidence Reasoning (RCR), which extracts explicit confidence signals and resolves conflicts with predefined rules. It further proposes CR-DPO, a preference-optimization method for fine-tuning SCR that improves generalization to unseen datasets. Contribution/Results: For strong reasoners such as GPT-4o and GPT-4o mini, SCR outperforms RCR, with gains of up to 24.2% over a direct input-augmentation baseline; for the smaller Llama-3-8B, RCR outperforms vanilla SCR, and CR-DPO fine-tuning of SCR yields an average 8.9% improvement across seen and unseen datasets.

📝 Abstract
Large Language Models (LLMs) are often augmented with external contexts, such as those used in retrieval-augmented generation (RAG). However, these contexts can be inaccurate or intentionally misleading, leading to conflicts with the model's internal knowledge. We argue that robust LLMs should demonstrate situated faithfulness, dynamically calibrating their trust in external information based on their confidence in the internal knowledge and the external context to resolve knowledge conflicts. To benchmark this capability, we evaluate LLMs across several QA datasets, including a newly created dataset featuring in-the-wild incorrect contexts sourced from Reddit posts. We show that when provided with both correct and incorrect contexts, both open-source and proprietary models tend to overly rely on external information, regardless of its factual accuracy. To enhance situated faithfulness, we propose two approaches: Self-Guided Confidence Reasoning (SCR) and Rule-Based Confidence Reasoning (RCR). SCR enables models to self-assess the confidence of external information relative to their own internal knowledge to produce the most accurate answer. RCR, in contrast, extracts explicit confidence signals from the LLM and determines the final answer using predefined rules. Our results show that for LLMs with strong reasoning capabilities, such as GPT-4o and GPT-4o mini, SCR outperforms RCR, achieving improvements of up to 24.2% over a direct input augmentation baseline. Conversely, for a smaller model like Llama-3-8B, RCR outperforms SCR. Fine-tuning SCR with our proposed Confidence Reasoning Direct Preference Optimization (CR-DPO) method improves performance on both seen and unseen datasets, yielding an average improvement of 8.9% on Llama-3-8B. In addition to quantitative results, we offer insights into the relative strengths of SCR and RCR.
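CR-DPO builds on Direct Preference Optimization, which trains a policy to prefer chosen over rejected responses relative to a frozen reference model. As a minimal sketch of the underlying DPO objective (the standard formulation, not the paper's exact CR-DPO variant; the log-probability values below are illustrative):

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Standard DPO objective for one preference pair.

    CR-DPO applies this style of preference optimization to
    confidence-reasoning traces; the exact pair construction is
    specific to the paper and not reproduced here.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response than the reference does, versus the rejected one.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): small when the policy ranks pairs correctly.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Increasing the policy's log-probability on the chosen response (relative to the reference) drives the loss below log 2, the value at zero margin.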
Problem

Research questions and friction points this paper is trying to address.

LLMs over-rely on external contexts regardless of their factual accuracy
Resolving conflicts between internal knowledge and external evidence
Calibrating trust in external information (situated faithfulness)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Guided Confidence Reasoning (SCR) enhances LLM accuracy.
Rule-Based Confidence Reasoning (RCR) uses predefined rules for decisions.
Confidence Reasoning Direct Preference Optimization (CR-DPO) fine-tunes SCR.
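The RCR idea of deciding with predefined rules over explicit confidence signals can be sketched as a simple threshold rule. The margin value and rule shape below are illustrative assumptions, not the paper's exact rules:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    confidence: float  # model-reported confidence in [0, 1]

def rcr_decide(internal: Candidate, contextual: Candidate,
               trust_margin: float = 0.1) -> str:
    """Rule-based confidence reasoning (RCR), sketched as a threshold
    rule: default to the context-supported answer, overriding it only
    when internal confidence beats contextual confidence by a margin.
    """
    if internal.confidence >= contextual.confidence + trust_margin:
        return internal.answer
    return contextual.answer
```

Defaulting to the context mirrors the paper's observation that external evidence is usually helpful; the margin guards against being misled when the model is substantially more confident in what it already knows.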