🤖 AI Summary
Existing reward models (RMs), trained on general-purpose preference data, struggle to judge RAG outputs for faithfulness to retrieved context, relevance to the user query, completeness, conciseness, and appropriate refusal when the context is insufficient. To address the lack of public RAG-centric preference data and specialised RMs, the authors propose RAGferee, a methodology that repurposes question-answering (QA) datasets into preference pairs prioritising groundedness over stylistic features. Using RAGferee, they curate a compact preference dataset of only 4K samples and fine-tune RMs ranging from 7B to 24B parameters. On ContextualJudgeBench, these RAG-centric RMs achieve state-of-the-art performance, surpassing existing 70B+ general-purpose RMs trained on far larger corpora (up to 2.4M samples) by an absolute margin of +15.5%.
📝 Abstract
Existing Reward Models (RMs), typically trained on general preference data, struggle in Retrieval-Augmented Generation (RAG) settings, which require judging responses for faithfulness to retrieved context, relevance to the user query, appropriate refusals when context is insufficient, and completeness and conciseness of information. To address the lack of publicly available RAG-centric preference datasets and specialised RMs, we introduce RAGferee, a methodology that repurposes question-answering (QA) datasets into preference pairs that prioritise groundedness over stylistic features, enabling the training of contextual RMs better suited to judging RAG responses. Using RAGferee, we curate a small preference dataset of 4K samples and fine-tune RMs ranging from 7B to 24B parameters. Our RAG-centric RMs achieve state-of-the-art performance on ContextualJudgeBench, surpassing existing 70B+ RMs trained on much larger (up to 2.4M samples) general corpora, with an absolute improvement of +15.5%.
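To make the core idea concrete, here is a minimal illustrative sketch of repurposing QA data into groundedness-prioritising preference pairs. This is not the authors' actual pipeline: the function names and the toy word-overlap faithfulness check are hypothetical stand-ins (the paper's method would use a stronger grounding signal), but the contrastive chosen/rejected structure is the general shape such preference data takes.

```python
# Hypothetical sketch: build a RAG-centric preference pair from QA data.
# A candidate answer that is grounded in the retrieved context is "chosen";
# an ungrounded one is "rejected", regardless of style or fluency.

def make_preference_pair(question, context, candidates, is_grounded):
    """Form a chosen/rejected pair that prioritises groundedness.

    `is_grounded(answer, context)` is an assumed faithfulness check
    (a real pipeline might use an NLI model or answer-span matching).
    Returns None when no contrastive pair can be formed.
    """
    grounded = [a for a in candidates if is_grounded(a, context)]
    ungrounded = [a for a in candidates if not is_grounded(a, context)]
    if grounded and ungrounded:
        return {
            "prompt": {"question": question, "context": context},
            "chosen": grounded[0],
            "rejected": ungrounded[0],
        }
    return None


def overlap_grounded(answer, context):
    """Toy heuristic: every content word of the answer appears in the context."""
    ctx = context.lower()
    words = (w.strip(".,").lower() for w in answer.split())
    return all(w in ctx for w in words if len(w) > 3)


pair = make_preference_pair(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare around 1600.",
    candidates=[
        "Hamlet was written by William Shakespeare.",   # grounded
        "Hamlet was written by Christopher Marlowe.",   # contradicts context
    ],
    is_grounded=overlap_grounded,
)
```

Pairs like `pair` (or `None` for skipped examples) can then be collected into the preference dataset used to fine-tune a contextual RM.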