Tool-integrated Reinforcement Learning for Repo Deep Search

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the semantic gap between natural-language problem descriptions and faulty code, which impedes multi-hop dependency reasoning, this paper proposes ToolTrain, a two-stage tool-integrated training framework. ToolTrain combines rejection-sampled supervised fine-tuning with tool-integrated reinforcement learning to explicitly model the invocation logic of repository retrieval tools and multi-step reasoning paths. Crucially, it brings external retrieval tools into the large language model's training loop, strengthening the model's ability to navigate and localize faults within complex code structures. Experiments show that a 32B-parameter model trained with ToolTrain achieves state-of-the-art performance on function-level fault localization, outperforming Claude-3.7, and that the improved localization significantly raises end-to-end automated program repair success rates.
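The summary above describes an agentic loop in which the model alternately invokes repository retrieval tools and reasons over their results until it commits to a localization. A minimal sketch of that control flow, with entirely hypothetical tool and action names (`search_code`, `finish`) standing in for the paper's actual tool API:

```python
# Sketch of a Repo Deep Search loop: the LLM alternates between calling
# repository retrieval tools and emitting a final localization. Tool and
# action names here are illustrative, not the paper's actual interface.

def repo_deep_search(llm_step, tools, issue, max_steps=10):
    """Run a multi-step tool-use loop until the model returns locations."""
    trajectory = [("issue", issue)]
    for _ in range(max_steps):
        action, args = llm_step(trajectory)           # model picks next action
        if action == "finish":                        # final localization
            return args["locations"], trajectory
        observation = tools[action](**args)           # execute retrieval tool
        trajectory.append((action, args, observation))
    return [], trajectory                             # step budget exhausted

# Toy stand-ins to show the control flow end to end.
def fake_llm(trajectory):
    if len(trajectory) == 1:
        return "search_code", {"query": "timeout"}
    return "finish", {"locations": ["http/client.py::request"]}

tools = {"search_code": lambda query: ["http/client.py::request"]}
locations, traj = repo_deep_search(fake_llm, tools, "requests hang on timeout")
print(locations)  # ['http/client.py::request']
```

The trajectory collected by this loop is exactly the artifact the two training stages operate on: successful trajectories become fine-tuning data, and trajectory-level rewards drive the reinforcement learning stage.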

📝 Abstract
Issue localization, the process of identifying code locations that need modification to resolve software issues, is a critical yet challenging task in software development. The semantic gap between natural language issue descriptions and faulty code requires complex multi-hop reasoning through code dependencies. Existing LLM-based agents attempt to address this by integrating repository retrieval tools. However, this transforms issue localization into a demanding task we call Repo Deep Search, which requires the LLM to effectively utilize various repository retrieval tools throughout a multi-step reasoning and navigation process. To tackle this challenge, we present ToolTrain, a two-stage tool-integrated training framework combining rejection-sampled supervised fine-tuning and tool-integrated reinforcement learning to enhance LLMs' ability to use retrieval tools for issue localization. Experimental results show that ToolTrain-trained models achieve state-of-the-art performance, with our 32B model even surpassing Claude-3.7 on function-level localization. The results also show that improved localization performance translates to better end-to-end issue resolution performance. This further demonstrates that training for issue localization is a viable and effective strategy for improving automated software development.
Problem

Research questions and friction points this paper is trying to address.

Identifying code locations needing modification for issue resolution
Bridging semantic gap between issue descriptions and faulty code
Enhancing LLMs' tool usage for multi-step repository navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage tool-integrated training framework
Rejection-sampled supervised fine-tuning
Tool-integrated reinforcement learning
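The two stages listed above can be sketched with hypothetical helpers: stage one keeps only sampled trajectories whose final localization hits a ground-truth location (rejection sampling for SFT), and stage two scores rollouts with a recall-style localization reward for RL. The exact filtering rule and reward shaping are assumptions, not the paper's published formulas:

```python
# Illustrative sketch of ToolTrain's two training stages. The filtering
# criterion and reward function are assumed, not taken from the paper.

def rejection_sample(trajectories, gold):
    """Stage 1: keep trajectories whose predictions hit a gold location."""
    gold = set(gold)
    return [t for t in trajectories if gold & set(t["locations"])]

def localization_reward(predicted, gold):
    """Stage 2: fraction of ground-truth functions recovered (recall)."""
    if not gold:
        return 0.0
    return len(set(predicted) & set(gold)) / len(set(gold))

# Two sampled rollouts; only the first finds the gold function.
sampled = [
    {"locations": ["a.py::f", "b.py::g"]},
    {"locations": ["c.py::h"]},
]
kept = rejection_sample(sampled, gold=["b.py::g"])
print(len(kept))                                                  # 1
print(localization_reward(["a.py::f", "b.py::g"], ["b.py::g"]))   # 1.0
```

Filtering by outcome before fine-tuning means the SFT stage imitates only tool-use traces that actually localized the fault, while the RL stage can still extract signal from partially correct rollouts via the graded reward.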
Zexiong Ma
Peking University
Chao Peng
ByteDance
Qunhong Zeng
Beijing Institute of Technology
Pengfei Gao
ByteDance
Yanzhen Zou
Peking University
Bing Xie
Peking University