Frustratingly Simple Retrieval Improves Challenging, Reasoning-Intensive Benchmarks

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RAG methods exhibit limited performance on complex reasoning benchmarks (including MMLU, MMLU Pro, GPQA, and MATH), primarily because no web-scale retrieval corpus exists that is broad in coverage, high in quality, and appropriately sized. To address this, the authors propose CompactDS: a compact, high-quality, and diverse web-scale datastore. Through rigorous content filtering and deduplication, it preserves semantic coverage while retaining only ~1% of raw web data. CompactDS combines in-memory approximate nearest neighbor (ANN) search with disk-based exact retrieval, achieving sub-second latency and high recall on a single node. Its minimal RAG pipeline requires no complex agents or fine-tuning. Evaluated on multiple challenging benchmarks, CompactDS yields relative accuracy improvements of 10-33%, matching or outperforming both Google Search and state-of-the-art agent-based RAG systems.

📝 Abstract
Retrieval-augmented Generation (RAG) has primarily been studied in limited settings, such as factoid question answering; more challenging, reasoning-intensive benchmarks have seen limited success from minimal RAG. In this work, we challenge this prevailing view on established, reasoning-intensive benchmarks: MMLU, MMLU Pro, AGI Eval, GPQA, and MATH. We identify a key missing component in prior work: a usable, web-scale datastore aligned with the breadth of pretraining data. To this end, we introduce CompactDS: a diverse, high-quality, web-scale datastore that achieves high retrieval accuracy and sub-second latency on a single node. The key insights are (1) most web content can be filtered out without sacrificing coverage, and a compact, high-quality subset is sufficient; and (2) combining in-memory approximate nearest neighbor (ANN) retrieval and on-disk exact search balances speed and recall. Using CompactDS, we show that a minimal RAG pipeline achieves consistent accuracy improvements across all benchmarks and model sizes (8B-70B), with relative gains of 10% on MMLU, 33% on MMLU Pro, 14% on GPQA, and 19% on MATH. No single data source suffices alone, highlighting the importance of diversity of sources (web crawls, curated math, academic papers, textbooks). Finally, we show that our carefully designed in-house datastore matches or outperforms web search engines such as Google Search, as well as recently proposed, complex agent-based RAG systems, all while maintaining simplicity, reproducibility, and self-containment. We release CompactDS and our retrieval pipeline, supporting future research exploring retrieval-based AI systems.
Problem

Research questions and friction points this paper is trying to address.

Improving retrieval-augmented generation for reasoning-intensive benchmarks
Creating a compact, diverse, web-scale datastore for better retrieval
Balancing speed and recall in retrieval with in-memory and on-disk methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

CompactDS: diverse, high-quality, web-scale datastore
Combines in-memory ANN and on-disk exact search
Minimal RAG pipeline improves accuracy across benchmarks
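The two-stage retrieval idea above (approximate in-memory search followed by exact re-ranking) can be sketched as follows. This is a minimal, illustrative Python example, not the paper's actual implementation: an IVF-style index of centroids and cluster assignments stands in for the in-memory ANN structure, and an exact inner-product re-scoring of the candidate set stands in for the on-disk exact search. All sizes, names, and the toy corpus are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: 1,000 documents with 64-dim unit-norm embeddings
# (stand-ins for datastore passage embeddings; sizes are illustrative).
D, DIM, K_CLUSTERS, N_PROBE, K_FINAL = 1000, 64, 16, 4, 5
corpus = rng.standard_normal((D, DIM)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

# Stage 1 (in-memory ANN, IVF-style): assign each document to its most
# similar of K_CLUSTERS centroids; only the centroids and the
# assignment table are assumed to live in RAM.
centroids = corpus[rng.choice(D, K_CLUSTERS, replace=False)]
assignments = np.argmax(corpus @ centroids.T, axis=1)

def approximate_candidates(query, n_probe=N_PROBE):
    """Probe the n_probe centroids closest to the query and return the
    doc ids assigned to those clusters (a superset of the true top-k)."""
    nearest = np.argsort(-(centroids @ query))[:n_probe]
    return np.flatnonzero(np.isin(assignments, nearest))

def exact_rerank(query, candidate_ids, k=K_FINAL):
    """Stage 2 (on-disk exact search): re-score only the candidates with
    exact inner products, as if their full vectors were fetched from disk."""
    scores = corpus[candidate_ids] @ query
    order = np.argsort(-scores)[:k]
    return candidate_ids[order], scores[order]

query = corpus[42]  # use a document as its own query
cands = approximate_candidates(query)
top_ids, top_scores = exact_rerank(query, cands)
```

The design point the paper's second insight captures: stage 1 keeps memory and latency bounded by searching only a coarse structure, while stage 2 restores recall by scoring the small candidate set exactly.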