SSFO: Self-Supervised Faithfulness Optimization for Retrieval-Augmented Generation

πŸ“… 2025-08-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address hallucination and low faithfulness in retrieval-augmented generation (RAG), as well as existing methods' reliance on supervised fine-tuning or their high inference overhead, this paper proposes SSFO, a self-supervised alignment framework. SSFO requires no human annotations; instead, it constructs preference pairs by contrasting the model's context-conditioned and context-free generations, and trains with a modified DPO loss that exploits a benign form of likelihood displacement to explicitly strengthen the model's dependence on retrieved evidence. Technically, it unifies contextual contrastive generation, self-supervised preference learning, and likelihood displacement modeling. Evaluated on multilingual QA benchmarks, SSFO substantially improves faithfulness (+12.3% on average) over existing state-of-the-art methods while preserving instruction-following capability and cross-lingual generalization.
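The contrastive pair construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate` is a hypothetical stand-in for a call to the LLM, and the prompt templates are assumptions.

```python
def build_preference_pair(question, context, generate):
    """Sketch of SSFO-style self-supervised preference-pair construction.

    The context-conditioned generation is treated as the preferred ("chosen")
    response, and the context-free generation, which reflects only the
    model's parametric memory, as the dispreferred ("rejected") one.
    No human annotation is involved.
    """
    prompt_with_context = f"Context: {context}\nQuestion: {question}"
    prompt_without_context = f"Question: {question}"

    chosen = generate(prompt_with_context)      # grounded in retrieved evidence
    rejected = generate(prompt_without_context)  # parametric-memory answer

    # Both responses are paired under the *context-conditioned* prompt,
    # so preference optimization pushes probability mass toward
    # context-aligned tokens.
    return {"prompt": prompt_with_context, "chosen": chosen, "rejected": rejected}
```

In a real pipeline, `generate` would sample from the same model being aligned, and pairs where the two generations agree could be filtered out, since they carry no preference signal.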

πŸ“ Abstract
Retrieval-Augmented Generation (RAG) systems require Large Language Models (LLMs) to generate responses that are faithful to the retrieved context. However, faithfulness hallucination remains a critical challenge, as existing methods often require costly supervision and post-training or significant inference burdens. To overcome these limitations, we introduce Self-Supervised Faithfulness Optimization (SSFO), the first self-supervised alignment approach for enhancing RAG faithfulness. SSFO constructs preference data pairs by contrasting the model's outputs generated with and without the context. Leveraging Direct Preference Optimization (DPO), SSFO aligns model faithfulness without incurring labeling costs or additional inference burden. We theoretically and empirically demonstrate that SSFO leverages a benign form of *likelihood displacement*, transferring probability mass from parametric-based tokens to context-aligned tokens. Based on this insight, we propose a modified DPO loss function to encourage likelihood displacement. Comprehensive evaluations show that SSFO significantly outperforms existing methods, achieving state-of-the-art faithfulness on multiple context-based question-answering datasets. Notably, SSFO exhibits strong generalization, improving cross-lingual faithfulness and preserving general instruction-following capabilities. We release our code and model at the anonymous link: https://github.com/chkwy/SSFO
Problem

Research questions and friction points this paper is trying to address.

Optimizing faithfulness in retrieval-augmented generation systems
Reducing supervision costs and inference burdens in RAG
Addressing faithfulness hallucination through self-supervised alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised alignment approach for RAG faithfulness
Constructs preference pairs with and without context
Modified DPO loss to encourage likelihood displacement
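For reference, the standard DPO objective that SSFO builds on can be written for a single preference pair as a minimal, dependency-free sketch. This shows the vanilla loss only; the paper's modification to encourage likelihood displacement is not reproduced here, and the log-probabilities are assumed to be sequence-level sums under the policy and a frozen reference model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Vanilla DPO loss on one preference pair.

    chosen   = context-conditioned generation (preferred)
    rejected = context-free generation (dispreferred)

    loss = -log sigmoid(beta * [(logp_w - ref_logp_w) - (logp_l - ref_logp_l)])
    Minimizing it widens the policy's log-probability margin between the
    chosen and rejected responses relative to the reference model.
    """
    chosen_margin = logp_chosen - ref_logp_chosen
    rejected_margin = logp_rejected - ref_logp_rejected
    return -math.log(sigmoid(beta * (chosen_margin - rejected_margin)))
```

When the policy equals the reference model, both margins are zero and the loss is log 2; favoring the chosen response drives it below that baseline. SSFO's modified loss additionally shapes *where* the displaced probability mass goes, toward context-aligned tokens.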
πŸ”Ž Similar Papers
No similar papers found.