Single-Pass Document Scanning for Question Answering

📅 2025-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges in long-document question answering—namely, loss of global context due to chunked embeddings and prohibitive computational overhead of full-context Transformers—this paper proposes a linear-time, single-pass scanning method that processes raw text without segmentation. Instead, it dynamically identifies sentences most relevant to the query, thereby unifying global semantic modeling with efficient inference. The core innovations are a query-conditioned attention mechanism and a context-aware sentence importance scoring scheme, both built upon the Mamba architecture. Evaluated across 41 QA benchmarks, the method consistently outperforms chunked embedding approaches and matches the performance of large language models, while reducing GPU memory consumption by 83% and accelerating inference by 5.2×.
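The scanning procedure described above — one left-to-right pass that keeps a running summary of all preceding text and gives each sentence a query-conditioned importance score — can be sketched roughly as follows. This is a toy illustration, not the paper's method: the bag-of-words encoder, the exponential-decay recurrence, and the score weighting are all placeholders for the learned Mamba state-space model.

```python
# Illustrative sketch only: a bag-of-words encoder and a decay recurrence
# stand in for the learned Mamba scanner, to show the single-pass control flow.
import math
import re

def tokenize(text):
    return re.findall(r"\w+", text.lower())

def embed(text, vocab):
    """Normalized bag-of-words vector over a fixed vocabulary
    (a stand-in for the learned sentence encoder)."""
    vec = [0.0] * len(vocab)
    for tok in tokenize(text):
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def scan_and_score(sentences, query, decay=0.7):
    """One linear pass over the document. A recurrent state summarizes the
    entire preceding context (no chunk boundaries), and each sentence gets a
    query-conditioned score mixing its own match with the running-context match."""
    words = sorted({t for s in sentences + [query] for t in tokenize(s)})
    vocab = {w: i for i, w in enumerate(words)}
    q = embed(query, vocab)
    state = [0.0] * len(q)            # recurrent summary of the prefix
    scores = []
    for sent in sentences:
        s = embed(sent, vocab)
        # state update: decay the prefix summary, fold in this sentence
        state = [decay * a + (1 - decay) * b for a, b in zip(state, s)]
        local = sum(a * b for a, b in zip(s, q))           # sentence-query match
        contextual = sum(a * b for a, b in zip(state, q))  # context-query match
        scores.append(local + 0.5 * contextual)
    return scores

def top_k(sentences, query, k=2):
    """Return the k highest-scoring sentences, kept in document order."""
    scores = scan_and_score(sentences, query)
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return [sentences[i] for i in sorted(ranked)]
```

Because the state carries information from everything before the current sentence, a sentence can score highly even when the query terms appeared much earlier in the document — the property that chunked-embedding retrieval loses at chunk boundaries.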

📝 Abstract
Handling extremely large documents for question answering is challenging: chunk-based embedding methods often lose track of important global context, while full-context transformers can be prohibitively expensive for hundreds of thousands of tokens. We propose a single-pass document scanning approach that processes the entire text in linear time, preserving global coherence while deciding which sentences are most relevant to the query. On 41 QA benchmarks, our single-pass scanner consistently outperforms chunk-based embedding methods and competes with large language models at a fraction of the computational cost. By conditioning on the entire preceding context without chunk breaks, the method preserves global coherence, which is especially important for long documents. Overall, single-pass document scanning offers a simple solution for question answering over massive text. All code, datasets, and model checkpoints are available at https://github.com/MambaRetriever/MambaRetriever
Problem

Research questions and friction points this paper is trying to address.

Efficiently process large documents for question answering
Preserve global context without high computational cost
Improve accuracy over chunk-based methods in QA benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-pass linear time document processing
Preserves global coherence without chunking
Outperforms chunk-based methods cost-effectively
Weili Cao
Laboratory for Emerging Intelligence, University of California, San Diego
Jianyou Wang
Laboratory for Emerging Intelligence, University of California, San Diego
Youze Zheng
Laboratory for Emerging Intelligence, University of California, San Diego
Longtian Bao
Laboratory for Emerging Intelligence, University of California, San Diego
Qirui Zheng
Laboratory for Emerging Intelligence, University of California, San Diego
Taylor Berg-Kirkpatrick
University of California San Diego
Natural Language Processing
R. Paturi
Laboratory for Emerging Intelligence, University of California, San Diego
Leon Bergen
Associate Professor, UCSD
Computational Linguistics