PatchGuru: Patch Oracle Inference from Natural Language Artifacts with Large Language Models

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that software patches can introduce unintended behaviors: regression tests often lack sufficient coverage, and patch intent is typically stated only in natural language descriptions that are non-executable and therefore inadequate for verifying semantic correctness. To overcome this, the authors propose an approach that uses large language models to automatically synthesize executable patch specifications, termed "patch oracles", from the unstructured text of real-world pull requests. By combining differential analysis with a self-refinement mechanism, the method iteratively improves runtime assertions that enable cross-version behavioral validation. Evaluated on 400 pull requests, the approach discovered 24 real bugs, including 12 previously unknown ones, with a precision of 0.62 (versus 0.32 for the state-of-the-art baseline Testora), at an average cost of $0.07 and 8.9 minutes per pull request.

📝 Abstract
As software systems evolve, patches may unintentionally alter program behavior. Validating patches against their intended semantics is difficult due to incomplete regression tests and informal, non-executable natural language (NL) descriptions of patch intent. We present PatchGuru, the first automated technique that infers executable patch specifications from real-world pull requests (PRs). Given a PR, PatchGuru uses large language models (LLMs) to extract developer intent from NL artifacts and synthesizes patch oracles: under-approximate yet practical specifications expressed as runtime assertions in comparison programs that integrate pre- and post-patch versions. Patch oracles focus on patch-relevant behaviors, enable automated validation, and support cross-version properties. PatchGuru iteratively refines inferred oracles by comparing pre- and post-patch behaviors, identifies violations, filters inconsistencies via self-review, and generates bug reports. We evaluate PatchGuru on 400 recent PRs from four widely used open-source Python projects. PatchGuru reports 39 warnings with a precision of 0.62, yielding 24 confirmed true positives, including 12 previously unknown bugs, 11 of which were subsequently fixed by developers. Compared to the state-of-the-art technique Testora, PatchGuru detects 17 more bugs (24 vs. 7) while improving precision from 0.32 to 0.62. PatchGuru incurs an average cost of 8.9 minutes and USD 0.07 per PR. These results suggest that PatchGuru complements code review and regression testing by providing executable documentation and automated validation of patch intent.
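To make the notion of a "patch oracle" concrete: the abstract describes it as runtime assertions inside a comparison program that integrates pre- and post-patch versions and checks a cross-version property derived from the PR's stated intent. The sketch below illustrates this shape on an invented example; the function names, the patch (stripping a trailing slash), and the property are all hypothetical, not taken from the paper's actual oracles, which are LLM-generated per PR.

```python
# Hypothetical patch-oracle sketch: both program versions live in one
# comparison program, and an assertion encodes the intent-derived
# cross-version property. Everything here is illustrative.

def normalize_path_pre(path: str) -> str:
    # Pre-patch behavior: collapse duplicate slashes only.
    while "//" in path:
        path = path.replace("//", "/")
    return path

def normalize_path_post(path: str) -> str:
    # Post-patch behavior: additionally strip a trailing slash,
    # per the (hypothetical) PR's stated intent.
    path = normalize_path_pre(path)
    if len(path) > 1 and path.endswith("/"):
        path = path[:-1]
    return path

def patch_oracle(path: str) -> None:
    pre = normalize_path_pre(path)
    post = normalize_path_post(path)
    # Cross-version property: the patch may only remove a trailing
    # slash; any other behavioral change is a candidate regression.
    assert post == pre or (pre.endswith("/") and post == pre[:-1]), (
        f"unexpected behavioral change: {pre!r} -> {post!r}"
    )

# Differential execution over patch-relevant inputs.
for case in ["/a//b/", "a/b", "//", "/a/b//c"]:
    patch_oracle(case)
```

A violated assertion here signals either a real regression or a wrong oracle; per the abstract, PatchGuru distinguishes the two via self-review and refinement.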
Problem

Research questions and friction points this paper is trying to address.

patch validation
natural language artifacts
patch intent
software evolution
regression testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

patch oracle
large language models
natural language artifacts
executable specification
automated validation
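The iterative workflow the abstract describes (synthesize an oracle, compare pre- and post-patch behaviors, filter violations via self-review, refine or report) can be sketched as a loop. The sketch below is an assumption-laden outline, not PatchGuru's actual interface: `synthesize`, `self_review`, and `refine` stand in for LLM-backed steps, and the round limit is invented.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Violation:
    test_input: object
    message: str

def run_oracle(oracle: Callable[[object], None],
               inputs: List[object]) -> List[Violation]:
    # Execute the candidate oracle differentially; assertion
    # failures become candidate behavioral violations.
    found = []
    for x in inputs:
        try:
            oracle(x)
        except AssertionError as exc:
            found.append(Violation(x, str(exc)))
    return found

def infer_and_validate(synthesize, self_review, refine,
                       inputs, max_rounds=3) -> Tuple[Callable, List[Violation]]:
    # Hypothetical loop: synthesize an oracle from NL artifacts,
    # run it, self-review violations to separate oracle mistakes
    # from real regressions, and refine the oracle otherwise.
    oracle = synthesize()
    for _ in range(max_rounds):
        violations = run_oracle(oracle, inputs)
        if not violations:
            return oracle, []           # patch consistent with intent
        confirmed = [v for v in violations if self_review(v)]
        if confirmed:
            return oracle, confirmed    # report as candidate bugs
        oracle = refine(oracle, violations)  # oracle judged wrong; redo
    return oracle, []
```

In this framing, the surviving violations feed the generated bug reports, while an oracle that passes all rounds serves as the executable documentation of patch intent mentioned in the abstract.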