PrismRAG: Boosting RAG Factuality with Distractor Resilience and Strategized Reasoning

📅 2025-07-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
RAG systems often produce factual inaccuracies when retrieved context contains semantically distracting passages or when questions demand deep, multi-step reasoning. To address this, the paper proposes PrismRAG, an efficient distractor-aware fine-tuning framework with two key components: (1) training on distractor-aware QA pairs that mix gold evidence with subtle distractor passages, explicitly building robustness to semi-relevant context; and (2) instilling reasoning-centric habits that lead the model to plan, rationalize, and synthesize without extensive human-engineered instructions. Evaluated on 12 open-book RAG QA benchmarks spanning diverse domains and scenarios, the approach improves average factuality by 5.4% over state-of-the-art solutions, establishing a scalable, low-intervention path to factual consistency in RAG that requires neither external modules nor manual annotation.
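The first component, distractor-aware QA pairs, can be illustrated with a minimal sketch. The paper does not publish its data-construction code; the function name, prompt layout, and shuffling scheme below are assumptions for illustration only, showing the core idea of mixing gold evidence with distractor passages so the model cannot rely on position or on every passage being relevant.

```python
import random

def build_distractor_aware_sample(question, answer, gold_passage,
                                  distractor_passages, n_distractors=2,
                                  seed=0):
    """Assemble one fine-tuning example mixing the gold evidence with
    subtly related distractor passages, shuffled so the gold passage
    has no fixed position (hypothetical sketch, not the paper's code)."""
    rng = random.Random(seed)
    distractors = rng.sample(distractor_passages,
                             min(n_distractors, len(distractor_passages)))
    context = distractors + [gold_passage]
    rng.shuffle(context)
    prompt = (
        "Answer using only the passages below.\n\n"
        + "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(context))
        + f"\n\nQuestion: {question}"
    )
    return {"prompt": prompt, "target": answer}

sample = build_distractor_aware_sample(
    question="Who proposed general relativity?",
    answer="Albert Einstein",
    gold_passage="General relativity was published by Albert Einstein in 1915.",
    distractor_passages=[
        "Special relativity, introduced in 1905, reconciled mechanics "
        "with electromagnetism.",
        "Isaac Newton formulated classical gravitation in the Principia (1687).",
    ],
)
```

The distractors are deliberately topical (physics, relativity) rather than random, since the paper targets robustness against *semi-relevant* passages, not obviously irrelevant ones.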

📝 Abstract
Retrieval-augmented generation (RAG) often falls short when retrieved context includes confusing, semi-relevant passages, or when answering questions requires deep contextual understanding and reasoning. We propose an efficient fine-tuning framework, called PrismRAG, that (i) trains the model with distractor-aware QA pairs mixing gold evidence with subtle distractor passages, and (ii) instills reasoning-centric habits that make the LLM plan, rationalize, and synthesize without relying on extensive human-engineered instructions. Evaluated across 12 open-book RAG QA benchmarks spanning diverse application domains and scenarios, PrismRAG improves average factuality by 5.4%, outperforming state-of-the-art solutions.
Problem

Research questions and friction points this paper is trying to address.

Enhancing RAG factuality with distractor resilience
Improving deep contextual understanding and reasoning
Reducing reliance on human-engineered instructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning with distractor-aware QA pairs
Instilling reasoning-centric model habits
Improving factuality without human instructions
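The second innovation, reasoning-centric habits, amounts to supervising the model on structured targets that plan and cite evidence before answering. The field names and layout below are hypothetical (the paper's actual target format is not reproduced here); this is only a sketch of the general idea.

```python
def make_reasoning_target(plan, evidence, answer):
    """Format a supervision target that rewards planning and evidence
    citation before the final answer, instead of answering directly
    (illustrative layout, not the paper's actual format)."""
    return (f"Plan: {plan}\n"
            f"Evidence: {evidence}\n"
            f"Answer: {answer}")

target = make_reasoning_target(
    plan="Find the passage about general relativity, then extract its author.",
    evidence="Passage [2] says general relativity was published by "
             "Albert Einstein in 1915.",
    answer="Albert Einstein",
)
```

Fine-tuning on targets shaped like this is one way a model can acquire plan-then-answer behavior without handcrafted inference-time instructions.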
Mohammad Kachuee
Meta Reality Labs
Teja Gollapudi
Meta Reality Labs
Minseok Kim
Meta Reality Labs
Yin Huang
Research Assistant, University of Florida
Multi-Armed Bandits · Edge Computing · Wireless Communications · Quantum Networking
Kai Sun
Meta Reality Labs
Xiao Yang
Meta Reality Labs
Jiaqi Wang
Meta Reality Labs
Nirav Shah
Meta Reality Labs
Yue Liu
Meta Reality Labs
Aaron Colak
Meta Reality Labs
Anuj Kumar
Meta Reality Labs
Wen-tau Yih
Meta FAIR
Xin Luna Dong
ACM / IEEE Fellow, Principal Scientist at Meta
Knowledge graph · Data quality · NLP · Search