Optimizing RAG Rerankers with LLM Feedback via Reinforcement Learning

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the misalignment between traditional reranking models, which are trained on static human annotations, and the dynamic generation process of large language models (LLMs): retrieval results can be topically relevant yet ineffective for generation. To bridge this gap, the authors propose RRPO, a framework that formulates reranking as a sequential decision-making process and directly optimizes LLM generation quality via reinforcement learning, using LLM feedback end-to-end without requiring human labels. RRPO introduces a reference-anchored deterministic baseline to stabilize training. Experiments show that RRPO significantly outperforms strong baselines such as RankZephyr on knowledge-intensive tasks, is robust to noisy supervision, generalizes across diverse reader LLMs, and integrates seamlessly with query expansion modules.
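The summary's phrase "reinforcement learning with a reference-anchored deterministic baseline" maps naturally onto a REINFORCE-with-baseline objective. The formulation below is a hedged reconstruction from that description, not an equation taken from the paper: π_θ is the reranking policy, σ a sampled ranking, R(·) the LLM-feedback reward, and σ_ref a deterministic reference ranking whose reward serves as the baseline.

```latex
% Hedged reconstruction of the policy-gradient objective implied by the summary.
\nabla_\theta J(\theta)
  = \mathbb{E}_{\sigma \sim \pi_\theta}\!\left[
      \bigl( R(\sigma) - R(\sigma_{\mathrm{ref}}) \bigr)\,
      \nabla_\theta \log \pi_\theta(\sigma)
    \right]
```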
📝 Abstract
Rerankers play a pivotal role in refining retrieval results for Retrieval-Augmented Generation. However, current reranking models are typically optimized on static, human-annotated relevance labels in isolation, decoupled from the downstream generation process. This isolation leads to a fundamental misalignment: documents identified as topically relevant by information retrieval metrics often fail to provide the actual utility required by the LLM for precise answer generation. To bridge this gap, we introduce ReRanking Preference Optimization (RRPO), a reinforcement learning framework that directly aligns reranking with the LLM's generation quality. By formulating reranking as a sequential decision-making process, RRPO optimizes for context utility using LLM feedback, thereby eliminating the need for expensive human annotations. To ensure training stability, we further introduce a reference-anchored deterministic baseline. Extensive experiments on knowledge-intensive benchmarks demonstrate that RRPO significantly outperforms strong baselines, including the powerful list-wise reranker RankZephyr. Further analysis highlights the versatility of our framework: it generalizes seamlessly to diverse readers (e.g., GPT-4o), integrates orthogonally with query expansion modules like Query2Doc, and remains robust even when trained with noisy supervisors.
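To make the training recipe concrete, here is a minimal sketch in PyTorch of the loop the abstract describes: a policy scores documents, a ranking is sampled sequentially (Plackett-Luce style), an LLM-feedback reward for that ranking is compared against the reward of the policy's own greedy deterministic ranking (the reference baseline), and REINFORCE updates the scorer. All names (`RerankerPolicy`, `llm_generation_reward`) and the toy reward are illustrative assumptions, not the authors' implementation; in practice the reward would come from the reader LLM's answer quality.

```python
import torch
import torch.nn as nn

class RerankerPolicy(nn.Module):
    """Hypothetical scorer for (query, document) pairs; higher score = rank earlier."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, query: torch.Tensor, docs: torch.Tensor) -> torch.Tensor:
        # query: (dim,), docs: (n_docs, dim) -> scores: (n_docs,)
        q = query.expand(docs.size(0), -1)
        return self.scorer(torch.cat([q, docs], dim=-1)).squeeze(-1)

def sample_ranking(scores: torch.Tensor):
    """Sample a permutation one position at a time (Plackett-Luce) and
    accumulate its log-probability for the policy gradient."""
    remaining = list(range(scores.size(0)))
    order, log_prob = [], scores.new_zeros(())
    for _ in range(scores.size(0)):
        dist = torch.distributions.Categorical(logits=scores[remaining])
        idx = dist.sample()
        log_prob = log_prob + dist.log_prob(idx)
        order.append(remaining.pop(idx.item()))
    return order, log_prob

def llm_generation_reward(order, query, docs) -> float:
    """Stand-in for LLM feedback. In the paper's setting this would measure the
    reader LLM's generation quality given the documents in `order` (e.g., answer
    EM/F1). Here: a toy proxy rewarding query-similar documents ranked first."""
    return (docs @ query)[order[0]].item()

def rrpo_step(policy, optimizer, query, docs):
    scores = policy(query, docs)
    # Stochastic rollout: a sampled ranking and its log-probability.
    order, log_prob = sample_ranking(scores)
    reward = llm_generation_reward(order, query, docs)
    # Reference-anchored deterministic baseline: reward of the greedy ranking.
    with torch.no_grad():
        greedy = torch.argsort(policy(query, docs), descending=True).tolist()
    baseline = llm_generation_reward(greedy, query, docs)
    # REINFORCE with baseline: reinforce rankings that beat the deterministic one.
    loss = -(reward - baseline) * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward, baseline

if __name__ == "__main__":
    policy = RerankerPolicy(dim=64)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    q, d = torch.randn(64), torch.randn(8, 64)  # one query, 8 candidate documents
    print(rrpo_step(policy, opt, q, d))
```

Using the deterministic greedy ranking as the baseline (rather than a learned value function or a running mean) is one plausible reading of "reference-anchored deterministic baseline": it reduces gradient variance without extra trainable parameters, at the cost of a second reward evaluation per step.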
Problem

Research questions and friction points this paper is trying to address.

Reranking
Retrieval-Augmented Generation
LLM Feedback
Reinforcement Learning
Generation-Utility Alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Retrieval-Augmented Generation
Reranking
LLM Feedback
Preference Optimization