🤖 AI Summary
This work proposes a lightweight listwise reranking framework that leverages specific attention heads of large language models, addressing two limitations of existing methods: difficulty exploiting global candidate information in long-context scenarios and reliance on manually annotated discrete relevance labels. The approach trains without Likert-scale supervision, produces continuous relevance scores, and improves both efficiency and accuracy through extensions such as fusing contextual information or using intermediate-layer attention heads. It achieves new state-of-the-art results on Wikipedia, long-form narrative datasets, and the LoCoMo benchmark for conversational understanding, showing that even a compact 4B-parameter model delivers strong performance.
📝 Abstract
Building on existing analyses of retrieval heads in large language models, we propose an alternative reranking framework that trains models to estimate passage-query relevance from the attention scores of selected heads. This listwise approach leverages holistic information across the entire candidate shortlist during ranking. It also naturally produces continuous relevance scores, enabling training on arbitrary retrieval datasets without requiring Likert-scale supervision. Our framework is lightweight and effective, requiring only small-scale models (e.g., 4B parameters) to achieve strong performance. Extensive experiments demonstrate that our method outperforms existing state-of-the-art pointwise and listwise rerankers across multiple domains, including Wikipedia and long narrative datasets. It further establishes a new state of the art on the LoCoMo benchmark, which assesses dialogue understanding and memory usage. The framework also supports flexible extensions: for example, augmenting candidate passages with contextual information further improves ranking accuracy, while training attention heads from middle layers improves efficiency without sacrificing performance.
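The core idea of scoring candidates by the attention mass of selected heads can be sketched in a toy form. This is a minimal illustration, not the paper's implementation: the head indices, tensor shapes, averaging scheme, and the `rerank_by_attention` helper are all assumptions made here for clarity, and real attention weights would come from a language model rather than random data.

```python
import numpy as np

def rerank_by_attention(attn, retrieval_heads, passage_spans):
    """Toy sketch: score each candidate passage by the attention mass
    that selected 'retrieval' heads place on its token span.

    attn: array (num_heads, num_query_tokens, num_context_tokens) of
        attention weights from query tokens to the packed candidate list.
    retrieval_heads: indices of heads assumed to track relevant content.
    passage_spans: list of (start, end) token ranges, one per candidate.

    Returns (ranking, scores): candidate indices in descending relevance,
    plus the continuous per-passage scores.
    """
    # Average attention over the chosen heads and over all query tokens.
    head_attn = attn[retrieval_heads].mean(axis=(0, 1))  # (num_context_tokens,)
    # A passage's continuous relevance score is its total attention mass;
    # scoring all candidates from one forward pass makes this listwise.
    scores = [float(head_attn[s:e].sum()) for s, e in passage_spans]
    ranking = np.argsort(scores)[::-1].tolist()
    return ranking, scores

# Hypothetical example: 4 heads, 3 query tokens, 12 context tokens
# covering three 4-token candidate passages.
rng = np.random.default_rng(0)
attn = rng.random((4, 3, 12))
attn /= attn.sum(axis=-1, keepdims=True)  # normalize rows like softmax
ranking, scores = rerank_by_attention(
    attn, retrieval_heads=[1, 2], passage_spans=[(0, 4), (4, 8), (8, 12)]
)
```

Because the scores are continuous rather than discrete Likert labels, they can be supervised directly against any retrieval dataset's relevance signal.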