MutaGReP: Execution-Free Repository-Grounded Plan Search for Code-Use

📅 2025-02-21
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Efficiently retrieving task-relevant context from large codebases for LLMs remains challenging due to prohibitive token overhead and execution dependencies. Method: This paper introduces a mutation-guided neural tree search framework that explores codebase-grounded, stepwise solutions in a natural language planning space, without requiring code execution. It integrates symbolic retrieval, neural tree search, plan mutation, and repository-aware semantic grounding. Results: On LongCodeArena, the method matches the performance of GPT-4o given full-repository context while consuming under 6.4K tokens (less than 5% of the full-context cost). It significantly improves the accuracy of Qwen2.5-Coder-32B/72B and makes progress on several of the hardest tasks. To the authors' knowledge, this is the first approach enabling high-precision, low-overhead, execution-free, codebase-aware plan generation.

๐Ÿ“ Abstract
When a human requests an LLM to complete a coding task using functionality from a large code repository, how do we provide context from the repo to the LLM? One approach is to add the entire repo to the LLM's context window. However, most tasks involve only a fraction of the symbols in a repo, longer contexts are detrimental to the LLM's reasoning abilities, and context windows are not unlimited. Alternatively, we could emulate the human ability to navigate a large repo, pick out the right functionality, and form a plan to solve the task. We propose MutaGReP (Mutation-guided Grounded Repository Plan Search), an approach that searches for plans decomposing a user request into natural language steps grounded in the codebase. MutaGReP performs neural tree search in plan space, exploring by mutating plans and using a symbol retriever for grounding. On the challenging LongCodeArena benchmark, our plans use less than 5% of the 128K context window for GPT-4o but rival the coding performance of GPT-4o with a context window filled with the repo. Plans produced by MutaGReP allow Qwen 2.5 Coder 32B and 72B to match the performance of GPT-4o with full repo context and enable progress on the hardest LongCodeArena tasks. Project page: zaidkhan.me/MutaGReP
Problem

Research questions and friction points this paper is trying to address.

Optimize code repository context for LLMs
Decompose coding tasks into natural language steps
Improve LLM performance with minimal context usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural tree search
Mutation-guided plan search
Symbol retriever grounding
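The three ideas above can be illustrated with a minimal sketch. This is not the paper's implementation: the scoring heuristic, the `mutate` operator, and the `retrieve_symbols` stub are all placeholder assumptions standing in for the LLM-driven mutation and neural symbol retriever described in the abstract.

```python
# Hypothetical sketch of mutation-guided plan search with symbol grounding.
# All helpers here are stand-ins, not MutaGReP's actual components.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Plan:
    score: float                                  # lower = more promising (heapq is a min-heap)
    steps: tuple = field(compare=False)           # natural-language steps
    symbols: tuple = field(compare=False)         # repo symbols grounding each step

def retrieve_symbols(step: str) -> tuple:
    # Stand-in for the symbol retriever: map a step to candidate repo symbols.
    return (f"repo.symbol_for({step!r})",)

def mutate(plan: Plan, variant: int) -> Plan:
    # Stand-in mutation: extend the plan with one more refined step.
    new_steps = plan.steps + (f"variant {variant} of step {len(plan.steps) + 1}",)
    symbols = sum((retrieve_symbols(s) for s in new_steps), ())
    # Toy scoring: prefer plans that have been refined more deeply.
    return Plan(score=-float(len(new_steps)), steps=new_steps, symbols=symbols)

def plan_search(request: str, budget: int = 5, branching: int = 3) -> Plan:
    # Best-first tree search in plan space: repeatedly pop the most
    # promising plan, mutate it into children, and keep the best seen.
    root = Plan(score=0.0, steps=(request,), symbols=retrieve_symbols(request))
    frontier = [root]
    best = root
    for _ in range(budget):
        if not frontier:
            break
        parent = heapq.heappop(frontier)
        for v in range(branching):
            child = mutate(parent, v)
            if child.score < best.score:
                best = child
            heapq.heappush(frontier, child)
    return best
```

The resulting plan's natural-language steps, paired with their retrieved symbols, form the compact repo-grounded context handed to the coding LLM, rather than the full repository.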