🤖 AI Summary
This work systematically investigates the effectiveness and limitations of Retrieval-Augmented Generation (RAG) for code generation. To address the core question of *when and why retrieval improves code generation*, the authors introduce CodeRAG-Bench, the first large-scale, multi-scenario RAG benchmark for code. It covers basic programming, open-domain, and repository-level tasks, and integrates five heterogeneous context sources: competition solutions, online tutorials, library documentation, StackOverflow posts, and GitHub repositories. Through comprehensive evaluation across multiple LLMs (e.g., CodeLlama, DeepSeek-Coder) and retrievers (e.g., BM25, DPR, ColBERT), the study shows that retrieving high-quality contexts substantially improves generation accuracy. However, current approaches face critical bottlenecks, including retrieval failures when lexical overlap between the problem and the corpus is low, and generators' limited context windows and weak ability to integrate additional retrieved contexts. CodeRAG-Bench is publicly released to serve as a community-standard evaluation platform for code-oriented RAG research.
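The pipeline the summary describes, retrieving context documents and prepending them to the generation prompt, can be sketched minimally. The sketch below uses a small stdlib-only BM25 (Okapi) scorer; the corpus snippets, helper names, and prompt layout are illustrative assumptions, not taken from the benchmark itself.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with BM25 (Okapi)."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    N = len(docs)
    df = Counter()  # document frequency per term
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            s += idf * tf[term] * (k1 + 1) / norm
        scores.append(s)
    return scores

def build_prompt(task, docs, top_k=2):
    """Prepend the top-k retrieved documents to the generation prompt."""
    ranked = sorted(zip(bm25_scores(task, docs), docs), reverse=True)
    context = "\n\n".join(d for _, d in ranked[:top_k])
    return f"# Retrieved context:\n{context}\n\n# Task:\n{task}\n"

# Hypothetical mini-corpus mixing documentation-style snippets.
corpus = [
    "pandas.read_csv(filepath) reads a CSV file into a DataFrame.",
    "The requests library sends HTTP requests with requests.get(url).",
    "numpy.argsort returns indices that would sort an array.",
]
prompt = build_prompt("read a CSV file into a DataFrame", corpus)
```

The lexical nature of BM25 also makes the paper's failure mode concrete: if the task were phrased as "load tabular data", none of its terms overlap with the pandas snippet, and the relevant document would score zero.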
📝 Abstract
While language models (LMs) have proven remarkably adept at generating code, many programs are challenging for LMs to generate using their parametric knowledge alone. Providing external contexts such as library documentation can facilitate generating accurate and functional code. Despite the success of retrieval-augmented generation (RAG) in various text-oriented tasks, its potential for improving code generation remains under-explored. In this work, we conduct a systematic, large-scale analysis by asking: in what scenarios can retrieval benefit code generation models? and what challenges remain? We first curate a comprehensive evaluation benchmark, CodeRAG-Bench, encompassing three categories of code generation tasks: basic programming, open-domain, and repository-level problems. We aggregate documents from five sources for models to retrieve contexts: competition solutions, online tutorials, library documentation, StackOverflow posts, and GitHub repositories. We examine top-performing models on CodeRAG-Bench by providing contexts retrieved from one or multiple sources. While notable gains are made in final code generation by retrieving high-quality contexts across various settings, our analysis reveals room for improvement: current retrievers still struggle to fetch useful contexts, especially when lexical overlap is limited, and generators fail to improve due to limited context lengths or a limited ability to integrate additional contexts. We hope CodeRAG-Bench serves as an effective testbed to encourage further development of advanced code-oriented RAG methods.