🤖 AI Summary
Cross-encoders distilled from large language model (LLM) teachers are efficient passage re-rankers, but they still fall short of their teachers' ranking effectiveness, limiting their appeal despite clear computational advantages. Method: This paper attributes the gap to distillation pipelines that omit techniques known to help when fine-tuning cross-encoders on manually labeled data, namely hard-negative sampling, deep sampling, and listwise loss functions. It introduces Rank-DistiLLM, a dataset of LLM-generated ranking signals built with these techniques, replacing costly human annotations. Contribution/Results: Cross-encoders trained on Rank-DistiLLM keep their lightweight architecture while substantially improving ranking quality. Experiments on standard benchmarks show that the distilled models match the re-ranking effectiveness of their LLM teachers while being up to 173× faster at inference and 24× more memory efficient, effectively bridging the efficiency–effectiveness gap in practical retrieval systems.
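To make the listwise part of the recipe concrete, below is a minimal PyTorch sketch of one way to distill an LLM teacher's ranking into a cross-encoder with a listwise objective (a ListNet-style cross-entropy over teacher rank positions). The loss form, tensor shapes, and function name are illustrative assumptions, not necessarily the exact setup used for Rank-DistiLLM.

```python
import torch
import torch.nn.functional as F

def listwise_distillation_loss(student_scores: torch.Tensor,
                               teacher_ranks: torch.Tensor) -> torch.Tensor:
    """ListNet-style loss pushing cross-encoder scores toward a teacher ranking.

    student_scores: (num_queries, list_size) relevance scores from the cross-encoder.
    teacher_ranks:  (num_queries, list_size) rank positions assigned by the LLM
                    teacher (0 = most relevant).
    """
    # Turn teacher ranks into a target distribution: passages ranked closer to
    # the top receive more probability mass.
    target = F.softmax(-teacher_ranks.float(), dim=-1)
    log_probs = F.log_softmax(student_scores, dim=-1)
    # Cross-entropy between the teacher-derived and the student distributions.
    return -(target * log_probs).sum(dim=-1).mean()

# Toy usage: 8 queries, each with a 100-passage candidate list.
scores = torch.randn(8, 100, requires_grad=True)              # cross-encoder outputs
ranks = torch.stack([torch.randperm(100) for _ in range(8)])  # teacher rankings
listwise_distillation_loss(scores, ranks).backward()
```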
📝 Abstract
Cross-encoders distilled from large language models (LLMs) are often more effective re-rankers than cross-encoders fine-tuned on manually labeled data. However, distilled models do not match the effectiveness of their teacher LLMs. We hypothesize that this effectiveness gap arises because previous work has not applied the best-suited methods for fine-tuning cross-encoders on manually labeled data (e.g., hard-negative sampling, deep sampling, and listwise loss functions). To close this gap, we create a new dataset, Rank-DistiLLM. Cross-encoders trained on Rank-DistiLLM achieve the effectiveness of LLMs while being up to 173 times faster and 24 times more memory efficient. Our code and data are available at https://github.com/webis-de/ECIR-25.
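As a rough illustration of how such training data can be assembled, the sketch below samples a deep candidate list from a first-stage retriever (which naturally contains hard negatives) and labels it with a teacher LLM's ranking. The callables `retrieve` and `llm_rank` are hypothetical placeholders; the actual Rank-DistiLLM construction is described in the paper and repository.

```python
from typing import Callable, Sequence

def build_distillation_examples(
    queries: Sequence[str],
    retrieve: Callable[[str, int], list[str]],        # hypothetical first-stage retriever (e.g., BM25)
    llm_rank: Callable[[str, list[str]], list[int]],  # hypothetical LLM teacher returning a ranking
    depth: int = 100,
) -> list[dict]:
    """Build listwise training examples: deep candidate lists labeled with teacher ranks."""
    examples = []
    for query in queries:
        # Deep sampling: take the top-`depth` passages, so the list includes
        # hard negatives that the first-stage retriever scores highly.
        passages = retrieve(query, depth)
        # The teacher returns passage indices ordered from most to least relevant.
        order = llm_rank(query, passages)
        teacher_ranks = [order.index(i) for i in range(len(passages))]
        examples.append({"query": query, "passages": passages, "teacher_ranks": teacher_ranks})
    return examples
```

Examples produced this way can then be scored by the cross-encoder and trained against a listwise objective like the one sketched above.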