Conflict-Aware Soft Prompting for Retrieval-Augmented Generation

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address reasoning errors in retrieval-augmented generation (RAG) caused by conflicts between retrieved external context and the large language model's (LLM's) parametric knowledge, known as context-memory conflict, this paper proposes the CARE framework. CARE introduces a lightweight context assessor that dynamically evaluates the credibility of retrieved passages, paired with a conflict-aware soft prompting mechanism: compact memory token embeddings are encoded from the raw context tokens, and grounded/adversarial training steers the LLM to fall back on its parametric knowledge when the external context is unreliable, or to follow high-confidence external evidence otherwise. Crucially, CARE requires no architectural modification or fine-tuning of the LLM backbone. Evaluated on question answering and fact-checking benchmarks, CARE achieves an average accuracy gain of 5.0%, substantially improving the robustness and adaptability of RAG systems under knowledge conflicts.
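The conflict-resolution behavior described above can be caricatured as a gating rule: trust the retrieved context only when the assessor judges it credible, otherwise fall back on parametric knowledge. The sketch below is purely illustrative; the threshold, function names, and scalar credibility score are assumptions for exposition, whereas CARE learns this behavior end-to-end through soft prompts rather than an explicit rule.

```python
def resolve_conflict(context_answer, parametric_answer, credibility, threshold=0.5):
    """Toy gating rule for context-memory conflict (illustrative only).

    `credibility` stands in for the context assessor's judgment of the
    retrieved passage; CARE encodes this signal into memory token
    embeddings instead of a hard threshold (hypothetical simplification).
    """
    if credibility >= threshold:
        return context_answer      # trust high-confidence external evidence
    return parametric_answer       # fall back on parametric knowledge


# Low-credibility retrieved context is overridden by parametric knowledge.
print(resolve_conflict("Paris is in Italy", "Paris is in France", 0.2))
# High-credibility context is followed instead.
print(resolve_conflict("CARE was proposed in 2025", "unknown", 0.9))
```

The point of the caricature is that the decision is input-dependent: neither "always trust retrieval" nor "always trust the model" resolves context-memory conflict.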

📝 Abstract
Retrieval-augmented generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge into their input prompts. However, when the retrieved context contradicts the LLM's parametric knowledge, the model often fails to resolve the conflict between incorrect external context and correct parametric knowledge, known as context-memory conflict. To tackle this problem, we introduce Conflict-Aware REtrieval-Augmented Generation (CARE), consisting of a context assessor and a base LLM. The context assessor encodes compact memory token embeddings from raw context tokens. Through grounded/adversarial soft prompting, the context assessor is trained to discern unreliable context and capture a guidance signal that directs reasoning toward the more reliable knowledge source. Extensive experiments show that CARE effectively mitigates context-memory conflicts, leading to an average performance gain of 5.0% on QA and fact-checking benchmarks, establishing a promising direction for trustworthy and adaptive RAG systems.
Problem

Research questions and friction points this paper is trying to address.

Resolving context-memory conflicts in retrieval-augmented generation
Mitigating unreliable external context with parametric knowledge
Improving trustworthiness in adaptive RAG systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conflict-Aware REtrieval-Augmented Generation (CARE) system
Context assessor encodes compact memory token embeddings
Grounded/adversarial soft prompting trains context assessor
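To make the soft-prompting idea concrete, here is a minimal sketch of the two pieces the bullets name: compressing raw context token embeddings into a few "memory token" vectors, and prepending those vectors to the input embeddings so the backbone LLM's parameters stay untouched. The chunked mean-pooling compressor is a hypothetical stand-in; CARE's context assessor is a learned encoder, not a pooling rule.

```python
def compress_to_memory_tokens(token_embs, n_mem):
    """Compress context token embeddings into n_mem memory vectors by
    mean-pooling contiguous chunks (a stand-in for CARE's learned
    context assessor, which encodes raw context tokens into compact
    memory token embeddings)."""
    chunk = max(1, len(token_embs) // n_mem)  # hypothetical chunking scheme
    mems = []
    for i in range(0, len(token_embs), chunk):
        group = token_embs[i:i + chunk]
        dim = len(group[0])
        mems.append([sum(vec[d] for vec in group) / len(group) for d in range(dim)])
    return mems[:n_mem]


def prepend_soft_prompt(mem_tokens, input_embs):
    """Soft prompting: memory embeddings are prepended to the input
    embedding sequence; no backbone weights are modified."""
    return mem_tokens + input_embs


# Four 2-d context token embeddings compressed into two memory tokens.
ctx = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
mems = compress_to_memory_tokens(ctx, n_mem=2)
print(mems)  # [[2.0, 3.0], [6.0, 7.0]]
```

In the actual framework, the grounded/adversarial training signal decides what these memory vectors encode: for reliable context they steer the LLM toward the evidence, and for corrupted context they steer it back toward parametric knowledge.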