A Novel Approach for Automatic Program Repair using Round-Trip Translation with Large Language Models

📅 2024-01-15
🏛️ arXiv.org
📈 Citations: 6
Influential: 1
📄 PDF
🤖 AI Summary
This work investigates the feasibility and underlying mechanisms of automatic program repair (APR) with large language models (LLMs) without fine-tuning, via a code → natural language → code round-trip translation (RTT) paradigm. The core hypothesis is that RTT leverages a pre-trained model's implicit prior over correct code: regenerating code from a natural-language description performs a "regression toward the mean" that suppresses bugs, which are noise relative to the more frequent bug-free code seen in pre-training. RTT is evaluated across eight code-pretrained LLMs, including recent GPT versions, and four Java program repair benchmarks. On HumanEval-Java, GPT-4 with English as the intermediate language repairs 101 of 164 buggy programs, including 46 bugs not repaired by LLMs fine-tuned for APR. The results position RTT as a lightweight, fine-tuning-free, and mechanistically interpretable alternative to repair-specific fine-tuning.

📝 Abstract
Research shows that grammatical mistakes in a sentence can be corrected by translating it to another language and back using neural machine translation with language models. We investigate whether this correction capability of Large Language Models (LLMs) extends to Automatic Program Repair (APR). Current generative models for APR are pre-trained on source code and fine-tuned for repair. This paper proposes bypassing the fine-tuning step and using Round-Trip Translation (RTT): translation of code from one programming language to another programming or natural language, and back. We hypothesize that RTT with LLMs restores the most commonly seen patterns in code during pre-training, i.e., performs a regression toward the mean, which removes bugs as they are a form of noise w.r.t. the more frequent, natural, bug-free code in the training data. To test this hypothesis, we employ eight recent LLMs pre-trained on code, including the latest GPT versions, and four common program repair benchmarks in Java. We find that RTT with English as an intermediate language repaired 101 of 164 bugs with GPT-4 on the HumanEval-Java dataset. Moreover, 46 of these are unique bugs that are not repaired by other LLMs fine-tuned for APR. Our findings highlight the viability of round-trip translation with LLMs as a technique for automated program repair and its potential for research in software engineering.

Keywords: automated program repair, large language model, machine translation
Problem

Research questions and friction points this paper is trying to address.

Investigating round-trip translation for automated program repair
Testing whether language models can fix bugs through code translation
Evaluating LLM-generated patches against standard APR benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using round-trip translation for program repair
Converting code between languages via LLMs
Leveraging statistical patterns to fix bugs
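The round-trip idea above can be sketched as a two-step pipeline: translate buggy code into a natural-language description, then regenerate code from that description alone, letting the model's prior over "natural" code wash out the bug. A minimal illustration, where `llm()` is a hypothetical stand-in for any code-pretrained LLM API (not an actual API from the paper):

```python
def llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call (e.g., GPT-4).

    Here it returns canned outputs so the sketch is runnable;
    in practice this would query a real model.
    """
    if prompt.startswith("Describe"):
        # Forward pass output: a natural-language spec that omits the bug.
        return "Return the maximum of two integers."
    # Backward pass output: code regenerated from the description alone.
    return "int max(int a, int b) { return a > b ? a : b; }"


def round_trip_repair(buggy_code: str) -> str:
    # Step 1: code -> natural language (English as the intermediate language).
    description = llm(f"Describe what this code should do:\n{buggy_code}")
    # Step 2: natural language -> code; the original buggy code is discarded,
    # so the defect cannot be copied back into the candidate patch.
    return llm(f"Write Java code that does the following:\n{description}")


buggy = "int max(int a, int b) { return a < b ? a : b; }"  # bug: < should be >
print(round_trip_repair(buggy))
```

In a real setting, each candidate patch produced this way would then be validated against the benchmark's test suite, as is standard in APR evaluation.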