🤖 AI Summary
This study addresses a critical limitation of existing retrieval-augmented generation (RAG) benchmarks: they cannot disentangle performance gains from improved retrieval mechanisms versus enhanced document representations. To isolate the impact of document preprocessing, the authors fix the retriever, using BM25 as a consistent baseline, and systematically evaluate diverse document transcription and preprocessing strategies across multilingual and visually dense RAG tasks. Their experiments reveal that optimizing document representation alone substantially narrows the performance gap between BM25 and state-of-the-art multimodal retrievers, indicating that much of the observed gain in current systems comes from representation quality rather than advances in retrieval algorithms. Based on these findings, the work advocates a new benchmarking paradigm that decouples document transcription from retrieval capability, enabling more precise evaluation of RAG components.
📝 Abstract
Retrieval-augmented generation (RAG) is a common way to ground language models in external documents and up-to-date information. Classical retrieval systems relied on lexical methods such as BM25, which rank documents by term overlap with corpus-level weighting. End-to-end multimodal retrievers trained on large query-document datasets claim substantial improvements over these approaches, especially for multilingual documents with complex visual layouts. We show that better document representation is the primary driver of these benchmark improvements: by systematically varying transcription and preprocessing methods while holding the retrieval mechanism fixed, we demonstrate that BM25 can close much of the gap on multilingual and visual benchmarks. Our findings call for decomposed evaluation benchmarks that separately measure transcription and retrieval capabilities, enabling the field to correctly attribute progress and focus effort where it matters.
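To make the "term overlap with corpus-level weighting" idea concrete, here is a minimal sketch of the standard BM25 (Okapi) scoring formula over pre-tokenized documents. The parameter defaults (`k1=1.5`, `b=0.75`) are common choices, not values from this paper, and the tokenization is assumed to happen upstream:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document in `docs` against `query_terms`.

    docs: list of token lists; query_terms: list of tokens.
    Returns one BM25 score per document.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N  # average document length
    # Document frequency per query term: the corpus-level weighting.
    df = {t: sum(1 for d in docs if t in d) for t in set(query_terms)}
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue  # no term overlap, no contribution
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # Saturating term-frequency component with length normalization.
            norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
            score += idf * norm
        scores.append(score)
    return scores
```

Because the score depends only on the transcribed tokens, any change to how a document is transcribed or preprocessed flows directly into the ranking, which is why fixing BM25 isolates the representation's contribution.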