🤖 AI Summary
Existing retrieval models struggle to simultaneously capture context awareness, causal dependencies, and effective retrieval scope when processing long documents. This work proposes AttentionRetriever, which, for the first time, demonstrates that attention mechanisms can serve as highly effective retrievers for long documents. By integrating entity-level embeddings, the method constructs context-aware dense representations and dynamically adapts the retrieval scope in a data-driven manner. Extensive experiments show that AttentionRetriever significantly outperforms current state-of-the-art models across multiple long-document retrieval benchmarks, while maintaining inference efficiency comparable to standard dense retrievers.
📝 Abstract
Retrieval-augmented generation (RAG) has been widely adopted to help Large Language Models (LLMs) process tasks involving long documents. However, existing retrieval models are not designed for long-document retrieval and fail to address several of its key challenges, including context awareness, causal dependence, and the scope of retrieval. In this paper, we propose AttentionRetriever, a novel long-document retrieval model that leverages the attention mechanism and entity-based retrieval to build context-aware embeddings for long documents and to determine the scope of retrieval. Through extensive experiments, we find that AttentionRetriever outperforms existing retrieval models on long-document retrieval datasets by a large margin while remaining as efficient as dense retrieval models.
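The abstract does not give implementation details, but the core idea of using attention as a retriever with a data-driven retrieval scope can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual method: the function name, the scaled dot-product scoring, and the cumulative-attention-mass cutoff (`coverage`) are all assumptions for the sake of the example.

```python
import numpy as np

def attention_retrieve(query_emb, chunk_embs, coverage=0.9):
    """Hypothetical sketch: rank document chunks by attention weight
    and adapt the retrieval scope from the attention distribution."""
    d = query_emb.shape[-1]
    # Scaled dot-product attention scores between query and chunks.
    scores = chunk_embs @ query_emb / np.sqrt(d)
    # Softmax (numerically stabilized) to get attention weights.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Data-driven scope: keep the smallest set of chunks whose
    # cumulative attention mass reaches the coverage threshold.
    order = np.argsort(weights)[::-1]
    cum = np.cumsum(weights[order])
    k = int(np.searchsorted(cum, coverage)) + 1
    return order[:k]
```

Under this sketch, a query that attends sharply to one chunk retrieves only that chunk, while a diffuse attention pattern widens the retrieval scope automatically, rather than using a fixed top-k.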