🤖 AI Summary
Prior agent-based program repair approaches lack evaluation on industrial-scale, real-world defects. Method: We construct the first enterprise-grade agent repair benchmark from 178 real Google bugs (78 human-reported, 100 machine-reported), revealing substantial distributional differences from open-source benchmarks such as SWE-Bench in language prevalence and patch size. We propose Passerine, a Gemini 1.5 Pro–driven agent tailored to Google's internal development environment that integrates planning, tool invocation, and code generation. Contribution/Results: With 20 trajectory samples, Passerine produces a plausible (test-passing) patch for 73% of machine-reported bugs (43% semantically equivalent to the ground-truth patch) and 25.6% of human-reported bugs (17.9% semantically equivalent). This work establishes the first agent-based repair baseline on an industrial defect corpus, empirically demonstrating both the feasibility of agent-based repair in production environments and its key challenges, such as sensitivity to bug-report quality and the gap between plausibility and semantic correctness.
📝 Abstract
Agent-based program repair offers the promise of automatically resolving complex bugs end-to-end by combining the planning, tool-use, and code generation abilities of modern LLMs. Recent work has explored agent-based repair approaches on the popular open-source SWE-Bench, a collection of bugs from highly rated GitHub Python projects, and various agentic approaches such as SWE-Agent have been proposed to solve bugs in this benchmark. This paper explores the viability of using an agentic approach to address bugs in an enterprise context. To investigate this, we curate an evaluation set of 178 bugs drawn from Google's issue tracking system. This dataset spans both human-reported (78) and machine-reported (100) bugs. To establish a repair performance baseline on this benchmark, we implement Passerine, an agent similar in spirit to SWE-Agent that can work within Google's development environment. We show that with 20 trajectory samples and Gemini 1.5 Pro, Passerine can produce a patch that passes the bug's tests (i.e., is plausible) for 73% of machine-reported and 25.6% of human-reported bugs in our evaluation set. After manual examination, we found that 43% of machine-reported bugs and 17.9% of human-reported bugs have at least one patch that is semantically equivalent to the ground-truth patch. These results establish a baseline on an industrially relevant benchmark, which, as we show, contains bugs drawn from a different distribution -- in terms of language diversity, size, and spread of changes -- compared to those in the popular SWE-Bench dataset.
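The sampling strategy described above (run the agent several times, accept a bug as plausibly repaired if any sampled trajectory yields a test-passing patch) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `run_agent_trajectory` is a hypothetical stub standing in for a full agent run (planning, tool calls, patch generation) against Google's internal tooling.

```python
def run_agent_trajectory(bug_id: str, seed: int) -> dict:
    """Stub for a single agent run (plan -> tool calls -> patch).

    Hypothetical placeholder: here we pretend roughly one in ten
    trajectories produces a patch that passes the bug's tests.
    """
    return {"patch": f"patch-{seed}", "passes_tests": seed % 10 == 7}


def repair_with_sampling(bug_id: str, n_samples: int = 20):
    """Sample up to n_samples independent trajectories for one bug.

    Returns the first patch that passes the bug's tests (a "plausible"
    patch), or None if the sampling budget is exhausted without one.
    """
    for seed in range(n_samples):
        result = run_agent_trajectory(bug_id, seed)
        if result["passes_tests"]:
            return result["patch"]
    return None  # no plausible patch within the sampling budget
```

Note that a plausible patch is only a candidate: as the abstract reports, manual review is still needed to judge whether it is semantically equivalent to the ground-truth fix.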