ReAttn: Improving Attention-based Re-ranking via Attention Re-weighting

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing attention mechanisms in zero-shot re-ranking, which tend to overemphasize a few tokens and are biased toward superficial lexical overlap between queries and documents, leading to inaccurate relevance estimation. To mitigate these issues, the authors propose a post-processing attention re-weighting method that requires neither additional training nor supervision. The approach suppresses attention on tokens that overlap frequently across candidate documents via cross-document inverse document frequency (IDF) weighting, and applies entropy-based regularization to promote a more uniform attention distribution. Experiments across multiple benchmark datasets show that the method alleviates both lexical bias and attention concentration, improving the accuracy and robustness of re-ranking and supporting its generalizability.

📝 Abstract
The strong capabilities of recent Large Language Models (LLMs) have made them highly effective for zero-shot re-ranking tasks. Attention-based re-ranking methods, which derive relevance scores directly from attention weights, offer an efficient and interpretable alternative to generation-based re-ranking methods. However, they still face two major limitations. First, attention signals are highly concentrated on a small subset of tokens within a few documents, making the remaining documents indistinguishable. Second, attention often overemphasizes phrases lexically similar to the query, yielding biased rankings in which irrelevant documents with mere lexical resemblance are regarded as relevant. In this paper, we propose **ReAttn**, a post-hoc re-weighting strategy for attention-based re-ranking methods. It first computes cross-document IDF weights to down-weight attention on query-overlapping tokens that appear frequently across the candidate documents, reducing lexical bias and emphasizing distinctive terms. It then applies entropy-based regularization to mitigate over-concentrated attention, encouraging a more balanced distribution across informative tokens. Both adjustments operate directly on existing attention weights, without additional training or supervision. Extensive experiments demonstrate the effectiveness of our method.
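The paper's exact formulation is not reproduced here, but the first step — down-weighting attention on tokens that recur across the candidate set via cross-document IDF — can be sketched as follows. The function name `idf_reweight`, the token-weight dictionaries, and the `log(1 + N/df)` form are illustrative assumptions, not the authors' precise definition:

```python
import math
from collections import Counter

def idf_reweight(attn, doc_tokens):
    """Down-weight attention on tokens that appear in many candidate
    documents (cross-document IDF). `attn` is a list of per-document
    {token: weight} dicts; shapes and names are illustrative."""
    n_docs = len(doc_tokens)
    # Document frequency: in how many candidate documents each token occurs.
    df = Counter()
    for toks in doc_tokens:
        df.update(set(toks))
    reweighted = []
    for weights in attn:
        # Scale each attention weight by an IDF factor, then renormalize
        # so the per-document weights still sum to 1.
        scaled = {t: w * math.log(1 + n_docs / df[t]) for t, w in weights.items()}
        z = sum(scaled.values()) or 1.0
        reweighted.append({t: w / z for t, w in scaled.items()})
    return reweighted
```

With two documents that share the token "the", attention mass shifts from the shared token toward each document's distinctive token, which is the intended de-biasing effect.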
Problem

Research questions and friction points this paper is trying to address.

attention-based re-ranking
lexical bias
attention concentration
zero-shot re-ranking
relevance scoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

attention re-weighting
re-ranking
lexical bias
IDF weighting
entropy regularization
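The entropy regularization listed above aims to flatten over-concentrated attention. One minimal way to realize that effect post hoc is temperature-style smoothing, raising each weight to a power below one and renormalizing, which provably increases the distribution's entropy; the paper's actual regularizer may differ, and `smooth_attention` and `tau` are illustrative names:

```python
import math

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def smooth_attention(weights, tau=2.0):
    """Flatten an over-concentrated attention distribution by raising
    each weight to 1/tau (tau > 1 smooths) and renormalizing.
    A temperature-style stand-in for entropy regularization."""
    powered = [w ** (1.0 / tau) for w in weights]
    z = sum(powered)
    return [w / z for w in powered]
```

For a peaked distribution like `[0.9, 0.05, 0.05]`, the smoothed version spreads mass toward the minor tokens, raising entropy while preserving the ranking of weights within the document.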