The Other Side of the Coin: Exploring Fairness in Retrieval-Augmented Generation

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the impact of retrieval-augmented generation (RAG) on the fairness of large language models (LLMs), revealing for the first time that integrating RAG into small-scale models (<8B parameters) significantly exacerbates fairness disparities. To address this, we propose a dual-path mitigation framework: FairFT—a retriever-LLM fair alignment fine-tuning method leveraging contrastive learning and preference optimization—and FairFilter—a post-retrieval bias filtering mechanism combining rule-based and model-based techniques. Evaluated on multiple real-world fairness benchmarks using open-source models including LLaMA3 and Mistral, our approach achieves an average 23.6% improvement in fairness metrics while preserving 98.4% of original task performance. The core contribution lies in identifying the “fairness paradox” of RAG in small models and establishing the first end-to-end fairness optimization paradigm specifically designed for the RAG paradigm.
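The summary describes FairFT as aligning the retriever with the LLM via preference optimization, so that documents leading to fairer outputs are ranked higher. A minimal sketch of that ranking idea, using a pairwise logistic loss as an illustrative stand-in (the loss form, function names, and scores below are assumptions, not the paper's actual objective):

```python
import math

# Illustrative pairwise preference loss in the spirit of FairFT: the
# retriever should assign a higher relevance score to a document that
# leads to a fairer LLM output than to one that leads to a biased output.
# `score_fair` / `score_biased` stand in for the retriever's scores; the
# logistic form is a common preference-optimization choice, assumed here.

def preference_loss(score_fair: float, score_biased: float) -> float:
    """Pairwise logistic loss: small when the fair document outscores
    the biased one, large when the ranking is reversed."""
    margin = score_fair - score_biased
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the margin in favor of the fair document grows,
# pushing the retriever toward fairness-aligned rankings during training.
loss_good = preference_loss(2.0, 0.5)  # fair doc ranked higher -> low loss
loss_bad = preference_loss(0.5, 2.0)   # biased doc ranked higher -> high loss
```

Minimizing this loss over many such pairs would nudge the retriever toward the fairness-aligned behavior the summary describes.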

📝 Abstract
Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs) by retrieving relevant documents from external knowledge sources. By referencing this external knowledge, RAG effectively reduces the generation of factually incorrect content and mitigates hallucination in LLMs. Recently, growing attention has been paid to improving the performance and efficiency of RAG systems from various perspectives. While these advancements have yielded significant results, applying RAG in domains with considerable societal implications raises a critical question about fairness: What impact does the introduction of the RAG paradigm have on the fairness of LLMs? To address this question, we conduct extensive experiments varying the LLMs, retrievers, and retrieval sources. Our experimental analysis reveals that the scale of the LLM plays a significant role in fairness outcomes within the RAG framework. When the model scale is smaller than 8B, integrating retrieval mechanisms often exacerbates unfairness in small-scale LLMs (e.g., LLaMA3.2-1B, Mistral-7B, and LLaMA3-8B). To mitigate the fairness issues RAG introduces for small-scale LLMs, we propose two approaches, FairFT and FairFilter. Specifically, in FairFT, we align the retriever with the LLM in terms of fairness, enabling it to retrieve documents that facilitate fairer model outputs. In FairFilter, we propose a fairness filtering mechanism that removes biased content after retrieval. Finally, we validate both approaches on real-world datasets, demonstrating that they improve fairness while maintaining performance.
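The retrieve-then-generate loop the abstract describes can be sketched minimally as follows. The toy word-overlap scorer and the prompt format are assumptions for illustration; a real system would use a dense retriever and an actual LLM call in place of the final prompt:

```python
# Minimal sketch of the RAG loop: score documents against the query,
# keep the top-k, and prepend them as context so the model can ground
# its answer in external knowledge. Word overlap stands in for a real
# retriever's relevance model.

def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words appearing in the doc."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the query with retrieved context, as a RAG system would."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG retrieves documents from external knowledge sources.",
    "Hallucination means generating factually incorrect content.",
    "Large language models are trained on web-scale text.",
]
query = "What does RAG retrieve from external sources?"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)
```

The paper's fairness question concerns exactly this step: the retrieved context shapes not only factuality but also the fairness of what the model generates.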
Problem

Research questions and friction points this paper is trying to address.

Investigates fairness impact of RAG on LLMs
Examines unfairness in small-scale LLMs with RAG
Proposes FairFT and FairFilter to mitigate RAG fairness issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns retriever with LLM for fairness
Filters biased content post-retrieval
Demonstrates effectiveness on real-world datasets
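The post-retrieval filtering contribution (FairFilter) combines rule-based and model-based bias detection. A hedged sketch of that two-path structure, where the blocklist and the word-frequency "classifier" are placeholder assumptions rather than the paper's actual components:

```python
# Sketch of a post-retrieval bias filter in the spirit of FairFilter:
# a document is dropped if either the rule-based path (a small phrase
# blocklist) or the model-based path flags it. `bias_score` is a toy
# stand-in for a learned bias classifier's probability.

BLOCKLIST = {"group x is inferior", "those people always"}

def rule_flag(doc: str) -> bool:
    """Rule-based path: flag documents containing blocklisted phrases."""
    lowered = doc.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def bias_score(doc: str) -> float:
    """Model-based path (stub): fraction of 'charged' words, standing in
    for a learned classifier's bias probability."""
    charged = {"inferior", "always", "never"}
    words = doc.lower().split()
    return sum(w in charged for w in words) / (len(words) or 1)

def fair_filter(docs: list[str], threshold: float = 0.2) -> list[str]:
    """Keep only documents passing both the rule- and model-based checks."""
    return [d for d in docs if not rule_flag(d) and bias_score(d) < threshold]

retrieved = [
    "Retrieval improves factual grounding.",
    "Group X is inferior and those people always cause trouble.",
]
kept = fair_filter(retrieved)
```

Filtering happens after retrieval and before generation, so the LLM never conditions on the flagged content.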
👥 Authors
Zheng Zhang, State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, Anhui 230027, China
Ning Li, State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, Anhui 230027, China
Qi Liu, State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, Anhui 230027, China
Rui Li, State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, Anhui 230027, China
Weibo Gao, State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, Anhui 230027, China
Qingyang Mao, University of Science and Technology of China (Table Reasoning; Cross-domain Transfer Learning; Visual Generation)
Zhenya Huang, University of Science and Technology of China (Data Science; AI; Knowledge Representation; Cognitive Reasoning; Intelligent Education)
Baosheng Yu, Assistant Professor, Nanyang Technological University (Machine Learning; Deep Learning; Computer Vision; AI for Medicine)
Dacheng Tao, Nanyang Technological University (Artificial Intelligence; Machine Learning; Computer Vision; Image Processing; Data Mining)