🤖 AI Summary
To address the dual challenges in long-document question answering—namely, loss of global context due to chunked embeddings and the prohibitive computational overhead of full-context transformers—this paper proposes a linear-time, single-pass scanning method that processes raw text without segmentation. Instead of chunking, it dynamically identifies the sentences most relevant to the query, thereby unifying global semantic modeling with efficient inference. The core innovations are a query-conditioned attention mechanism and a context-aware sentence importance scoring scheme, both built upon the Mamba architecture. Evaluated across 41 QA benchmarks, the method consistently outperforms chunked embedding approaches and matches the performance of large language models, while reducing GPU memory consumption by 83% and accelerating inference by 5.2×.
📝 Abstract
Handling extremely large documents for question answering is challenging: chunk-based embedding methods often lose track of important global context, while full-context transformers can be prohibitively expensive for hundreds of thousands of tokens. We propose a single-pass document scanning approach that processes the entire text in linear time, deciding which sentences are most relevant to the query. Because it conditions on the entire preceding context without chunk breaks, the method preserves global coherence, which is especially important for long documents. On 41 QA benchmarks, our single-pass scanner consistently outperforms chunk-based embedding methods and competes with large language models at a fraction of the computational cost. Overall, single-pass document scanning offers a simple solution for question answering over massive text. All code, datasets, and model checkpoints are available at https://github.com/MambaRetriever/MambaRetriever.
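To make the idea concrete, here is a minimal, hypothetical sketch of single-pass, query-conditioned sentence scoring. The recurrent state update below is a toy stand-in for a Mamba state-space layer, and `embed` is a simple hashed bag-of-words embedding rather than the paper's learned encoder; function names, dimensions, and the decay constant are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: scan a document once, carrying a running context state
# across sentences (no chunk boundaries), and score each sentence against the
# query. The state update is a toy stand-in for a Mamba layer.
import hashlib
import math
import re

DIM = 64  # toy embedding dimension (assumption, not from the paper)

def embed(text: str) -> list[float]:
    """Deterministic hashed bag-of-words embedding (illustrative only)."""
    v = [0.0] * DIM
    for tok in re.findall(r"\w+", text.lower()):
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % DIM
        v[idx] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def scan_and_score(sentences: list[str], query: str,
                   top_k: int = 2, decay: float = 0.9) -> list[str]:
    """Single linear pass over all sentences; returns the top_k most
    query-relevant ones, scored with awareness of preceding context."""
    q = embed(query)
    state = [0.0] * DIM  # running summary of all preceding context
    scored = []
    for sent in sentences:
        s = embed(sent)
        # Recurrent update: blend the new sentence into the running state.
        state = [decay * st + (1 - decay) * x for st, x in zip(state, s)]
        # Context-aware representation: sentence plus accumulated context.
        ctx = [(a + b) / 2 for a, b in zip(s, state)]
        # Query-conditioned relevance score (dot product with the query).
        scored.append((sum(a * b for a, b in zip(ctx, q)), sent))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sent for _, sent in scored[:top_k]]
```

Because the state is threaded through every sentence in order, a sentence's score can reflect the document read so far, which is the property chunked embeddings give up at chunk boundaries.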