LLM Optimization Unlocks Real-Time Pairwise Reranking

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high latency and computational overhead of LLM-based pairwise reranking in real-time retrieval-augmented generation, this paper proposes a systematic optimization framework for real-time deployment. Methodologically, it replaces large LLMs with lightweight variants, strictly constrains the candidate set size for reranking, applies INT4 quantization, adopts a one-directional sequential inference order to mitigate positional bias, and caps output length. Unlike prior approaches, this work achieves the first end-to-end real-time LLM-driven pairwise reranking (<0.4 s/query), reducing latency by 166× (from 61.36 s to 0.37 s per query) with a near-lossless drop in Recall@k. Extensive experiments reveal that several previously overlooked yet critical design choices, particularly those governing inference order, quantization, and candidate pruning, exert decisive influence on the efficiency-effectiveness trade-off. The framework significantly improves the practical deployability of LLM-based reranking in production environments.
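The headline speedup follows directly from the quoted per-query latencies:

```python
# Sanity-check the reported 166x speedup from the paper's latency figures.
baseline_s = 61.36   # per-query latency before optimization (seconds)
optimized_s = 0.37   # per-query latency after optimization (seconds)

speedup = baseline_s / optimized_s
print(round(speedup))  # 166
```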

📝 Abstract
Efficiently reranking documents retrieved from information retrieval (IR) pipelines to enhance the overall quality of a Retrieval-Augmented Generation (RAG) system remains an important yet challenging problem. Recent studies have highlighted the importance of Large Language Models (LLMs) in reranking tasks. In particular, Pairwise Reranking Prompting (PRP) has emerged as a promising plug-and-play approach due to its usability and effectiveness. However, the inherent complexity of the algorithm, coupled with the high computational demands and latency incurred by LLMs, raises concerns about its feasibility in real-time applications. To address these challenges, this paper presents a focused study on pairwise reranking, demonstrating that carefully applied optimization methods can significantly mitigate these issues. By implementing these methods, we achieve a remarkable latency reduction of up to 166×, from 61.36 seconds to 0.37 seconds per query, with an insignificant drop in performance measured by Recall@k. Our study highlights the importance of design choices that were previously overlooked, such as using smaller models, limiting the reranked set, using lower precision, reducing positional bias with one-directional order inference, and restricting output tokens. These optimizations make LLM-based reranking substantially more efficient and feasible for latency-sensitive, real-world deployments.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM-based pairwise reranking for real-time applications
Reducing high computational latency in document reranking systems
Making pairwise reranking feasible for latency-sensitive deployments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizing pairwise reranking with smaller models
Limiting reranked set and using lower precision
Reducing positional bias with one-directional order inference
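The listed optimizations can be sketched as a single one-directional pass over a truncated candidate list. Note this is a minimal sketch, not the paper's implementation: `llm_prefers_first` is a hypothetical stand-in for the actual LLM comparison call (a lightweight, quantized model prompted to emit a single "A"/"B" token); the word-overlap heuristic here exists only so the sketch runs.

```python
def llm_prefers_first(query: str, doc_a: str, doc_b: str) -> bool:
    """Placeholder for an LLM pairwise-preference call.

    A real PRP system would prompt a small, low-precision model with
    (query, doc_a, doc_b), cap generation at one output token, and parse
    the "A"/"B" answer. This word-overlap heuristic is illustrative only.
    """
    q = set(query.split())
    return len(q & set(doc_a.split())) >= len(q & set(doc_b.split()))

def pairwise_rerank(query: str, docs: list[str], top_k: int = 10) -> list[str]:
    # Optimization: rerank only the top-k retrieved candidates.
    head, tail = list(docs[:top_k]), list(docs[top_k:])
    # One-directional sweep (one bubble-sort pass from the bottom up):
    # each adjacent pair is compared in a single fixed order, avoiding
    # the doubled LLM calls of querying both (A, B) and (B, A) to cancel
    # positional bias.
    for i in range(len(head) - 1, 0, -1):
        if llm_prefers_first(query, head[i], head[i - 1]):
            head[i], head[i - 1] = head[i - 1], head[i]
    return head + tail
```

The sweep bubbles the most preferred candidate toward the top while leaving documents beyond `top_k` untouched, so LLM-call count grows with `top_k` rather than with the full retrieved list.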
Jingyu Wu, AI Foundations, Capital One
Aditya Shrivastava, AI Foundations, Capital One
Jing Zhu, AI Foundations, Capital One
Alfy Samuel, Capital One (NLP, Deep Learning, Responsible AI)
Anoop Kumar, AI Foundations, Capital One
Daben Liu, Capital One (Generative AI, NLP, Automatic Speech Recognition)