Agentic Program Repair from Test Failures at Scale: A Neuro-symbolic approach with static analysis and test execution feedback

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automated test-failure-driven program repair in large-scale codebases remains challenging due to the complexity of root-cause diagnosis and reliable patch generation. Method: This paper proposes a neuro-symbolic agent framework built on Llama that integrates ReAct-style reasoning, static analysis tools, dynamic test execution traces, and an LLM-as-a-Judge module for patch quality assessment. The agent plans over multiple steps and chooses among 15 actions, ranging from reading a file to generating a patch. Contribution/Results: Its core innovation lies in tightly coupling symbolic program analysis with LLM-based reasoning to achieve end-to-end, interpretable, and precise repair. On curated offline benchmarks, the selected configuration achieves a 42.3% solve rate with an average of 11.8 feedback iterations per repair. During three months of production deployment, 80% of the generated fixes were human-reviewed, and 31.5% of those reviewed (25.5% of all generated fixes) were landed, substantially reducing manual repair effort.
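The repair cycle summarized above can be pictured as the minimal sketch below. It is an illustration only, not the paper's implementation: the helpers `run_llm`, `run_static_analysis`, and `run_tests`, the `Action` type, and the stopping criteria are all hypothetical stand-ins. Only the shape of the reason/act/feedback loop follows the description.

```python
"""Minimal sketch of a ReAct-style repair loop with symbolic feedback.

All helpers and types are hypothetical stand-ins; the paper does not
publish its action set, prompts, or tool interfaces.
"""
from dataclasses import dataclass

MAX_ITERATIONS = 12  # the paper reports an average of 11.8 feedback iterations


@dataclass
class Action:
    name: str           # one of ~15 actions, e.g. "read_file", "generate_patch"
    argument: str = ""  # file path, search query, or patch text


@dataclass
class TestResult:
    passed: bool
    trace: str = ""


def run_llm(history: list[str]) -> Action:
    """Placeholder for the Llama/ReAct step that picks the next action."""
    return Action(name="generate_patch", argument="<patch text>")


def run_static_analysis(patch: str) -> str:
    """Placeholder: return static-analysis diagnostics, empty string if clean."""
    return ""


def run_tests(patch: str) -> TestResult:
    """Placeholder: apply the patch and re-run the failing test."""
    return TestResult(passed=True)


def repair(failure_trace: str) -> str | None:
    """Iterate reason/act/observe until the failing test passes or budget runs out."""
    history = [f"Test failure:\n{failure_trace}"]
    for _ in range(MAX_ITERATIONS):
        action = run_llm(history)  # reason + act (ReAct)
        if action.name != "generate_patch":
            # Exploration actions (read file, search, ...) feed observations back.
            history.append(f"Observation for {action.name}: ...")
            continue
        diagnostics = run_static_analysis(action.argument)
        if diagnostics:  # symbolic feedback, round 1: static analysis
            history.append(f"Static analysis errors:\n{diagnostics}")
            continue
        result = run_tests(action.argument)  # symbolic feedback, round 2: tests
        if result.passed:
            return action.argument  # candidate fix; still judged and human-reviewed
        history.append(f"Test failures:\n{result.trace}")
    return None  # iteration budget exhausted
```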

📝 Abstract
Aim: With the advent of LLMs, sophisticated agentic program repair has become viable at large organizations with large codebases. In this work, we develop an Engineering Agent that fixes the source code based on test failures at scale across diverse software offerings internally. Method: Using Llama as the base, we employ the ReAct harness to develop an agent. We start with a test failure that was triaged by a rule-based test failure bot. We then set up an agentic harness and allow the agent to reason and run a set of 15 actions, from reading a file to generating a patch. We provide feedback to the agent through static analysis and test failures so it can refine its solution. We leverage an LLM-as-a-Judge to ensure that the patch conforms to the standards, followed by a human review to land fixes. Benchmark Findings: We curated offline benchmarks for our patch generator, the Engineering Agent loop, and the LLM-as-a-Judge. In offline evaluations we found that a specialized 70B model is highly competitive with the much larger but vanilla Llama-405B. In an ablation study, we found that the ReAct harness (neural model) benefited from the symbolic information from static analysis tools and test execution traces. A model that strikes a balance between solve rate and error rate versus cost and latency has a benchmark solve rate of 42.3%, using an average of 11.8 feedback iterations. Production Findings: Over a three-month period, 80% of the generated fixes were reviewed, of which 31.5% were landed (25.5% of the total number of generated fixes). Feedback from Engineers: We used open coding to extract qualitative themes from engineers' feedback. We saw positive feedback in the form of quick approvals, gratitude, and surprise. We also found mixed feedback when the Engineering Agent's solution was only partially correct but served as a good starting point.
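The abstract's final gate, where an LLM-as-a-Judge screens patches before human review, could look roughly like the sketch below. The prompt wording, the `call_llm` helper, and the JSON verdict format are assumptions for illustration, not the paper's published interface.

```python
# Hypothetical sketch of the LLM-as-a-Judge gate described in the abstract.
# The prompt wording, call_llm helper, and verdict format are assumptions.
import json

JUDGE_PROMPT = """You are reviewing an automatically generated patch.
Test failure:
{failure}

Candidate patch:
{patch}

Does the patch plausibly fix the root cause and conform to the codebase's
coding standards? Answer as JSON: {{"verdict": "pass" | "fail", "reason": "..."}}
"""


def call_llm(prompt: str) -> str:
    """Placeholder for a call to the judge model."""
    return '{"verdict": "pass", "reason": "targets the failing assertion"}'


def judge_patch(failure: str, patch: str) -> bool:
    reply = call_llm(JUDGE_PROMPT.format(failure=failure, patch=patch))
    verdict = json.loads(reply)
    # Only patches the judge passes are forwarded to human review for landing.
    return verdict["verdict"] == "pass"
```

A patch that passes this gate is still landed only after human review; the judge's role is to filter out patches that clearly violate the codebase's standards before they reach an engineer.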
Problem

Research questions and friction points this paper is trying to address.

Develop an agentic program repair system for large codebases
Integrate static analysis and test feedback for patch refinement
Balance solve rate, cost, and latency in automated fixes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuro-symbolic approach with static analysis
ReAct harness for agentic program repair
LLM-as-a-Judge for patch validation
Chandra Maddila
Facebook
software engineering, developer productivity, machine learning, artificial intelligence, devops
Adam Tait
Meta Platforms Inc, Menlo Park, California, USA
Claire Chang
Meta Platforms Inc, Menlo Park, California, USA
Daniel Cheng
Jet Propulsion Laboratory
remote sensing, glaciology
Nauman Ahmad
Meta Platforms Inc, Menlo Park, California, USA
Vijayaraghavan Murali
Facebook
Programming Languages, Machine Learning, Software Engineering
Marshall Roch
Meta Platforms Inc, Menlo Park, California, USA
Arnaud Avondet
Meta Platforms Inc, Menlo Park, California, USA
Aaron Meltzer
Meta Platforms Inc, Menlo Park, California, USA
Victor Montalvao
Meta Platforms Inc, Menlo Park, California, USA
Michael Hopko
Meta Platforms Inc, Menlo Park, California, USA
Chris Waterson
Meta Platforms Inc, Menlo Park, California, USA
Parth Thakkar
Meta
LLMs for Code
Renuka Fernandez
Meta Platforms Inc, Menlo Park, California, USA
Kristian Kristensen
Meta Platforms Inc, Menlo Park, California, USA
Sivan Barzily
Meta Platforms Inc, Menlo Park, California, USA
Sherry Chen
Meta Platforms Inc, Menlo Park, California, USA
Rui Abreu
Meta Platforms, Inc and University of Porto/INESC-ID
SWE, SE4AI, AI4SE, CyberSec, Quantum Software
Nachiappan Nagappan
Facebook
Software Reliability, Productivity, Software Analytics
Payam Shodjai
Meta Platforms Inc, Menlo Park, California, USA
Killian Murphy
Meta Platforms Inc, Menlo Park, California, USA
James Everingham
Meta Platforms Inc, Menlo Park, California, USA
Aparna Ramani
Meta Platforms Inc, Menlo Park, California, USA
Peter C. Rigby
Meta Platforms Inc, Menlo Park, California, USA; Department of Computer Science and Software Engineering, Concordia University, Montréal, Québec, Canada