On the Influence of Context Size and Model Choice in Retrieval-Augmented Generation Systems

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the interplay among context window size, base large language models (LLMs), and retrieval methods in Retrieval-Augmented Generation (RAG) systems for long-form question answering. We construct a cross-domain benchmark spanning biomedical and encyclopedic domains, employing a dual-path retrieval framework—BM25 and semantic search—and evaluate eight state-of-the-art LLMs under a rigorous long-form QA evaluation protocol. Our key findings are threefold: (1) performance plateaus or degrades beyond 15 retrieved context chunks, establishing 15 as the empirically optimal chunk count; (2) LLMs exhibit strong domain specificity, with general-purpose models showing pronounced performance divergence across domains; and (3) open-domain evidence retrieval faces fundamental scalability bottlenecks in large-scale corpora. Crucially, our optimized configuration simultaneously enhances factual accuracy and content completeness in long-answer generation.

📝 Abstract
Retrieval-augmented generation (RAG) has emerged as an approach to augment large language models (LLMs) by reducing their reliance on static knowledge and improving answer factuality. RAG retrieves relevant context snippets and generates an answer based on them. Despite its increasing industrial adoption, systematic exploration of RAG components is lacking, particularly regarding the ideal size of the provided context and the choice of base LLM and retrieval method. To help guide the development of robust RAG systems, we evaluate various context sizes, BM25 and semantic search as retrievers, and eight base LLMs. Moving away from the usual RAG evaluation with short answers, we explore the more challenging long-form question answering in two domains, where a good answer has to utilize the entire context. Our findings indicate that final QA performance improves steadily with up to 15 snippets but stagnates or declines beyond that. Finally, we show that the general-purpose LLMs which excel in the biomedical domain differ from those that excel in the encyclopedic one, and that open-domain evidence retrieval in large corpora is challenging.
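The dual-path retrieval setup described above (BM25 alongside semantic search, returning the top-k snippets as context) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the bag-of-words cosine here merely stands in for a real embedding model, and the min-max normalization with max-fusion is an assumed combination rule, not one taken from the paper.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Lexical path: score each document against the query with BM25."""
    tokenized = [d.lower().split() for d in docs]
    q_terms = query.lower().split()
    n = len(docs)
    avgdl = sum(len(d) for d in tokenized) / n
    df = Counter()  # document frequency of each term
    for d in tokenized:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for t in q_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def cosine_scores(query, docs):
    """Semantic path stand-in: bag-of-words cosine instead of dense embeddings."""
    def vec(text):
        return Counter(text.lower().split())
    q = vec(query)
    out = []
    for d in docs:
        v = vec(d)
        dot = sum(q[t] * v[t] for t in q)
        norm = (math.sqrt(sum(x * x for x in q.values()))
                * math.sqrt(sum(x * x for x in v.values())))
        out.append(dot / norm if norm else 0.0)
    return out

def retrieve(query, docs, k=15):
    """Dual-path retrieval: min-max normalize both score lists,
    fuse by taking the max per document, and return the top-k snippets."""
    def minmax(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    lex = minmax(bm25_scores(query, docs))
    sem = minmax(cosine_scores(query, docs))
    fused = [max(l, s) for l, s in zip(lex, sem)]
    ranked = sorted(range(len(docs)), key=lambda i: fused[i], reverse=True)
    return [docs[i] for i in ranked[:k]]
```

The default k=15 mirrors the paper's finding that QA performance stops improving beyond roughly 15 retrieved snippets; in practice the snippets returned by `retrieve` would be concatenated into the LLM prompt.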
Problem

Research questions and friction points this paper is trying to address.

Optimizing context size in RAG systems
Choosing effective base LLMs for RAG
Enhancing long-form QA in specific domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates context size impact
Compares BM25 and semantic search as retrieval methods
Assesses LLMs across domains