🤖 AI Summary
Existing LLM-based reranking methods face a fundamental trade-off: pointwise supervised fine-tuning (SFT) lacks fine-grained relevance discrimination, whereas listwise approaches, though more powerful, suffer from high inference latency that hinders practical deployment. This paper proposes ERank, an efficient pointwise reranker that integrates SFT and reinforcement learning (RL). Its key innovation is a two-stage training paradigm: first, replacing binary relevance labels with generative integer scoring to enhance granularity; second, applying RL with a listwise-derived reward to give the pointwise model global ranking awareness. Built on reasoning-oriented LLMs, ERank balances accuracy and efficiency, and outperforms state-of-the-art methods on the BRIGHT, FollowIR, TREC DL, and BEIR benchmarks. On the reasoning-intensive BRIGHT benchmark, ERank-4B and ERank-32B achieve nDCG@10 scores of 38.7 and 40.2 respectively, the highest reported to date.
📝 Abstract
Text reranking models are a crucial component in modern systems like Retrieval-Augmented Generation, tasked with selecting the most relevant documents prior to generation. However, current Large Language Model (LLM)-powered rerankers often face a fundamental trade-off. On one hand, pointwise methods based on Supervised Fine-Tuning (SFT), which frame relevance as binary classification, lack the necessary scoring discrimination, particularly when built on reasoning LLMs. On the other hand, approaches designed for complex reasoning often employ powerful yet inefficient listwise formulations, rendering them impractical for low-latency applications. To resolve this dilemma, we introduce ERank, a highly effective and efficient pointwise reranker built from a reasoning LLM that excels across diverse relevance scenarios. We propose a novel two-stage training pipeline that begins with SFT. In this stage, we move beyond binary labels and train the model generatively to output fine-grained integer scores, which significantly enhances relevance discrimination. The model is then further refined using Reinforcement Learning (RL) with a novel, listwise-derived reward. This technique instills global ranking awareness into the efficient pointwise architecture. We evaluate the ERank reranker on the BRIGHT, FollowIR, TREC DL, and BEIR benchmarks, demonstrating superior effectiveness and robustness compared to existing approaches. On the reasoning-intensive BRIGHT benchmark, our ERank-4B achieves an nDCG@10 of 38.7, while the larger 32B variant reaches a state-of-the-art nDCG@10 of 40.2.
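Both the summary and the abstract report results as nDCG@10, the standard graded-relevance ranking metric. As a reference point, here is a minimal Python sketch of how that metric is computed; the function names and the example relevance labels are illustrative, not taken from the ERank codebase or the benchmarks.

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k relevance labels,
    in the order a reranker returned the documents."""
    return sum(rel / math.log2(rank + 2)  # rank 0 -> log2(2) = 1
               for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k=10):
    """nDCG@k: DCG of the system ranking divided by the DCG of the
    ideal (descending-relevance) ordering of the same documents."""
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical graded relevance labels (0-3) of documents in the order
# a pointwise reranker scored and sorted them.
system_ranking = [3, 2, 3, 0, 1, 2]
print(round(ndcg_at_k(system_ranking, k=10), 4))
```

A perfectly ordered list yields an nDCG@10 of 1.0, so the reported scores of 38.7 and 40.2 correspond to 0.387 and 0.402 on this normalized scale, as BRIGHT results are conventionally quoted multiplied by 100.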