🤖 AI Summary
To address the high latency and computational overhead of LLM-based pairwise re-ranking in real-time retrieval-augmented generation, this paper proposes a systematic optimization framework tailored for real-time deployment. Methodologically, it replaces large LLMs with lightweight variants, strictly constrains the candidate set size for re-ranking, applies INT4 quantization, designs a unidirectional sequential inference architecture to mitigate positional bias, and caps output length. Unlike prior approaches, this work achieves the first end-to-end real-time LLM-driven pairwise re-ranking (<0.4 s/query), reducing latency by 166× (from 61.36 s to 0.37 s) while leaving Recall@k nearly unchanged. Extensive experiments reveal that several previously overlooked yet critical design choices—particularly those governing inference architecture, quantization, and candidate pruning—exert decisive influence on the efficiency–effectiveness trade-off. The framework significantly enhances the practical deployability of LLM-based re-ranking in production environments.
📝 Abstract
Efficiently reranking documents retrieved from information retrieval (IR) pipelines to enhance the overall quality of a Retrieval-Augmented Generation (RAG) system remains an important yet challenging problem. Recent studies have highlighted the effectiveness of Large Language Models (LLMs) in reranking tasks. In particular, Pairwise Reranking Prompting (PRP) has emerged as a promising plug-and-play approach due to its usability and effectiveness. However, the inherent complexity of the algorithm, coupled with the high computational demands and latency incurred by LLMs, raises concerns about its feasibility in real-time applications. To address these challenges, this paper presents a focused study on pairwise reranking, demonstrating that carefully applied optimization methods can significantly mitigate these issues. By implementing these methods, we achieve a latency reduction of up to 166 times, from 61.36 seconds to 0.37 seconds per query, with a negligible drop in performance as measured by Recall@k. Our study highlights the importance of design choices that were previously overlooked, such as using smaller models, limiting the reranked set, using lower precision, reducing positional bias with one-directional order inference, and restricting output tokens. These optimizations make LLM-based reranking substantially more efficient and feasible for latency-sensitive, real-world deployments.
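The optimizations described above can be sketched in a minimal reranking loop. This is an illustrative sketch, not the paper's implementation: `compare` is a hypothetical stand-in for the LLM judgment (which in the paper would be a small, quantized model prompted to emit a single "A"/"B" token), and the toy relevance score inside it is purely for demonstration.

```python
def compare(query: str, doc_a: str, doc_b: str) -> str:
    """Hypothetical stand-in for the pairwise LLM call.

    A real PRP system would prompt a lightweight, low-precision model
    with (query, doc_a, doc_b) and cap generation at one output token.
    Here a toy word-overlap score makes the sketch self-contained.
    """
    score = lambda d: sum(1 for w in query.split() if w in d)
    return "A" if score(doc_a) >= score(doc_b) else "B"

def prp_rerank(query: str, docs: list[str], top_n: int = 5) -> list[str]:
    # Limit the reranked set: only the first top_n retrieved candidates
    # are compared; the remaining documents keep their original order.
    head, tail = docs[:top_n], docs[top_n:]
    # One-directional order inference: each pair (i, j) is compared in a
    # single fixed order, halving the number of LLM calls and avoiding
    # disagreements between the (A, B) and (B, A) prompt orderings.
    wins = {d: 0 for d in head}
    for i in range(len(head)):
        for j in range(i + 1, len(head)):
            winner = head[i] if compare(query, head[i], head[j]) == "A" else head[j]
            wins[winner] += 1
    # Rank candidates by the number of pairwise wins.
    head.sort(key=lambda d: wins[d], reverse=True)
    return head + tail
```

With `top_n` candidates, this issues `top_n * (top_n - 1) / 2` comparisons instead of twice that for bidirectional prompting, which, combined with the capped one-token output, is where most of the latency savings in the sketch would come from.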