🤖 AI Summary
This work addresses the limitations of large language models (LLMs) in repository-level code completion, where cross-file dependencies and constrained context windows hinder performance, and where existing retrieval-augmented approaches based on semantic indexing or graph structures incur high computational overhead. The paper presents the first systematic investigation of lightweight, index-free lexical retrieval for this task, specifically with tools like ripgrep, introducing GrepRAG. The method first employs an LLM to automatically generate ripgrep queries (Naive GrepRAG), then improves retrieval quality through identifier-weighted re-ranking and a structure-aware deduplication mechanism. Evaluated on CrossCodeEval and RepoEval-Updated, GrepRAG significantly outperforms state-of-the-art methods, achieving relative improvements of 7.04%–15.58% in code exact match (EM), thereby demonstrating the effectiveness of efficient lexical retrieval for repository-level code completion.
📝 Abstract
Repository-level code completion remains challenging for large language models (LLMs) due to cross-file dependencies and limited context windows. Prior work addresses this challenge using Retrieval-Augmented Generation (RAG) frameworks based on semantic indexing or structure-aware graph analysis, but these approaches incur substantial computational overhead for index construction and maintenance. Motivated by common developer workflows that rely on lightweight search utilities (e.g., ripgrep), we revisit a fundamental yet underexplored question: how far can simple, index-free lexical retrieval support repository-level code completion before more complex retrieval mechanisms become necessary? To answer this question, we systematically investigate lightweight, index-free, intent-aware lexical retrieval through extensive empirical analysis. We first introduce Naive GrepRAG, a baseline framework in which LLMs autonomously generate ripgrep commands to retrieve relevant context. Despite its simplicity, Naive GrepRAG achieves performance comparable to sophisticated graph-based baselines. Further analysis shows that its effectiveness stems from retrieving lexically precise code fragments that are spatially closer to the completion site. We also identify key limitations of lexical retrieval, including sensitivity to noisy matches from high-frequency ambiguous keywords and context fragmentation caused by rigid truncation boundaries. To address these issues, we propose GrepRAG, which augments lexical retrieval with a lightweight post-processing pipeline featuring identifier-weighted re-ranking and structure-aware deduplication. Extensive evaluation on CrossCodeEval and RepoEval-Updated demonstrates that GrepRAG consistently outperforms state-of-the-art (SOTA) methods, achieving a 7.04%–15.58% relative improvement in code exact match (EM) over the best baseline on CrossCodeEval.
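The two post-processing steps can be sketched as follows. This is a minimal illustration under assumed rules, not the paper's implementation: identifier weighting is approximated here by inverse document frequency over the retrieved snippets (down-weighting high-frequency ambiguous keywords), and "structure" is approximated by the enclosing `def`/`class` line rather than a full parse:

```python
import re
from collections import Counter

def identifier_weighted_rerank(snippets, query_identifiers):
    """Rank retrieved snippets by overlap with identifiers near the
    completion site, weighting each identifier by how rarely it occurs
    across the candidate set (hypothetical scoring scheme)."""
    # Count, for each query identifier, how many snippets contain it;
    # identifiers that match everywhere carry little signal.
    doc_freq = Counter()
    for snippet in snippets:
        tokens = set(re.findall(r"[A-Za-z_]\w*", snippet))
        doc_freq.update(tokens & set(query_identifiers))

    def score(snippet):
        tokens = set(re.findall(r"[A-Za-z_]\w*", snippet))
        return sum(1.0 / doc_freq[ident]
                   for ident in query_identifiers
                   if ident in tokens)

    return sorted(snippets, key=score, reverse=True)

def dedup_by_enclosing_def(snippets):
    """Structure-aware deduplication sketch: keep one snippet per enclosing
    function/class definition line, so several raw text matches inside the
    same definition collapse into a single context fragment."""
    seen, kept = set(), []
    for snippet in snippets:
        match = re.search(r"^\s*(def|class)\s+\w+", snippet, re.MULTILINE)
        key = match.group(0).strip() if match else snippet
        if key not in seen:
            seen.add(key)
            kept.append(snippet)
    return kept
```

For instance, a snippet containing both `load_config` and the rarer `parse` would be ranked above one containing only the ubiquitous `load_config`, and two ripgrep hits inside the same `def load_config` body would be merged into one fragment.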