🤖 AI Summary
To address inefficient and inaccurate retrieval in long-context language models, this paper proposes QRHEAD, a query-aware mechanism for identifying high-value retrieval heads that jointly leverages query-context attention weights and a handful of task-specific ground-truth examples. Building on QRHEAD, the authors design QR-RETRIEVER: a lightweight, zero-shot, plug-and-play retriever that requires no fine-tuning and applies directly to long-context reasoning and re-ranking. On LongMemEval and CLIPPER, QR-RETRIEVER outperforms full-context baselines by over 10%; on zero-shot re-ranking over BEIR, it surpasses LLM-based re-rankers such as RankGPT; and it generalizes well to Needle-in-a-Haystack and multi-hop reasoning benchmarks. The core contribution is the first formulation of attention-head selection as a query-driven dynamic subset-identification problem, enabling efficient, general-purpose, training-free retrieval augmentation for long-context LMs.
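The head-selection step described above can be sketched as follows. This is a minimal illustration under our own assumptions (function name, input layout, and scoring are ours, not the authors' API): each example supplies per-head attention weights, the query token positions, and the span of the gold evidence tokens, and each head is credited with the query-to-gold attention mass it accumulates across examples.

```python
import numpy as np

def select_qr_heads(examples, k):
    """Pick the top-k query-focused retrieval heads (illustrative sketch,
    not the authors' exact procedure).

    Each example is a dict with:
      "attn":      array [num_heads, seq_len, seq_len] of attention weights
      "query_pos": list of query token positions
      "gold_span": (start, end) token span of the ground-truth evidence
    """
    num_heads = examples[0]["attn"].shape[0]
    head_scores = np.zeros(num_heads)
    for ex in examples:
        attn, qpos = ex["attn"], ex["query_pos"]
        start, end = ex["gold_span"]
        # Credit each head with the attention mass its query tokens
        # place on the gold evidence span, accumulated over examples.
        head_scores += attn[:, qpos, start:end].sum(axis=(1, 2))
    # Indices of the k heads with the largest accumulated mass.
    return np.argsort(head_scores)[::-1][:k]
```

The same handful of labeled examples thus selects a fixed head subset once, which is then reused at inference time with no further supervision.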
📄 Abstract
Recent work has identified retrieval heads (Wu et al., 2025b), a subset of attention heads responsible for retrieving salient information in long-context language models (LMs), as measured by their copy-paste behavior in Needle-in-a-Haystack tasks. In this paper, we introduce QRHEAD (Query-Focused Retrieval Head), an improved set of attention heads that enhance retrieval from long context. We identify QRHEAD by aggregating attention scores with respect to the input query, using a handful of examples from real-world tasks (e.g., long-context QA). We further introduce QR-RETRIEVER, an efficient and effective retriever that uses the accumulated attention mass of QRHEAD as retrieval scores. We use QR-RETRIEVER for long-context reasoning by selecting the most relevant parts with the highest retrieval scores. On multi-hop reasoning tasks LongMemEval and CLIPPER, this yields over 10% performance gains over full context and outperforms strong dense retrievers. We also evaluate QR-RETRIEVER as a re-ranker on the BEIR benchmark and find that it achieves strong zero-shot performance, outperforming other LLM-based re-rankers such as RankGPT. Further analysis shows that both the query-context attention scoring and task selection are crucial for identifying QRHEAD with strong downstream utility. Overall, our work contributes a general-purpose retriever and offers interpretability insights into the long-context capabilities of LMs.
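The retrieval-scoring idea in the abstract (using the accumulated attention mass of the selected heads as retrieval scores) can be sketched as below. This is an illustrative sketch under our own assumptions (function name, array layout, and chunking are ours): each candidate chunk is scored by the total attention flowing from the query tokens to the chunk's tokens, summed over the chosen QR heads, and chunks are then ranked by that mass.

```python
import numpy as np

def qr_retrieval_scores(attn, qr_heads, query_pos, chunk_spans):
    """Score each chunk by accumulated query-to-chunk attention mass
    over the selected QR heads (illustrative sketch, not the authors' API).

    attn:        array [num_heads, seq_len, seq_len] of attention weights
    qr_heads:    indices of the selected query-focused retrieval heads
    query_pos:   token positions of the input query
    chunk_spans: list of (start, end) token spans, one per candidate chunk
    """
    scores = []
    for start, end in chunk_spans:
        # Attention from every query token to every token in the chunk,
        # accumulated over the selected heads only.
        mass = attn[np.ix_(qr_heads, query_pos, range(start, end))].sum()
        scores.append(mass)
    return np.array(scores)
```

Ranking chunks by these scores (e.g., `np.argsort(scores)[::-1]`) yields the retrieval order used both for selecting context in long-context reasoning and for zero-shot re-ranking.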