Fully Autonomous Programming using Iterative Multi-Agent Debugging with Large Language Models

📅 2025-02-26
🏛️ ACM Transactions on Evolutionary Learning and Optimization
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) for program synthesis frequently suffer from the “near-miss syndrome”: generated code closely resembles a correct solution but fails unit tests due to minor errors. Method: We propose SEIDR, a multi-agent framework establishing a closed-loop Synthesize, Execute, Instruct, Debug, and Repair workflow. It compares replace-focused, repair-focused, and hybrid debugging strategies, and ranks candidate programs across debugging rounds with lexicase or tournament selection, tailored for instruction-tuned LLMs. Contribution/Results: Implemented with GPT-3.5 and Llama 3-8B, SEIDR solves 18 C++ and 20 Python tasks on PSB2 at least once. On HumanEval-C++, SEIDR with Llama 3-8B achieves an average pass@100 of 84.2%; across all runs, 163 of 164 problems are solved at least once with GPT-3.5 and 162 of 164 with the smaller Llama 3-8B, a significant improvement over single-model debugging without a repair phase.

📝 Abstract
Program synthesis with Large Language Models (LLMs) suffers from a “near-miss syndrome”: the generated code closely resembles a correct solution but fails unit tests due to minor errors. We address this with a multi-agent framework called Synthesize, Execute, Instruct, Debug, and Repair (SEIDR). Effectively applying SEIDR to instruction-tuned LLMs requires determining (a) optimal prompts for LLMs, (b) what ranking algorithm selects the best programs in debugging rounds, and (c) balancing the repair of unsuccessful programs with the generation of new ones. We empirically explore these trade-offs by comparing replace-focused, repair-focused, and hybrid debug strategies. We also evaluate lexicase and tournament selection to rank candidates in each generation. On Program Synthesis Benchmark 2 (PSB2), our framework outperforms both conventional use of OpenAI Codex without a repair phase and traditional genetic programming approaches. SEIDR outperforms the use of an LLM alone, solving 18 problems in C++ and 20 in Python on PSB2 at least once across experiments. To assess generalizability, we employ GPT-3.5 and Llama 3 on the PSB2 and HumanEval-X benchmarks. Although SEIDR with these models does not surpass current state-of-the-art methods on the Python benchmarks, the results on HumanEval-C++ are promising. SEIDR with Llama 3-8B achieves an average pass@100 of 84.2%. Across all SEIDR runs, 163 of 164 problems are solved at least once with GPT-3.5 in HumanEval-C++, and 162 of 164 with the smaller Llama 3-8B. We conclude that SEIDR effectively overcomes the near-miss syndrome in program synthesis with LLMs.
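The Synthesize, Execute, Instruct, Debug, and Repair loop described in the abstract can be sketched in miniature. This is a hedged illustration only: the LLM calls are mocked by toy callables (`draft`, `fixed`), and all function names are illustrative, not the authors' API.

```python
# Minimal, runnable sketch of the SEIDR control loop. Real SEIDR would
# call GPT-3.5 or Llama 3 where the mock synthesize/repair callables
# are used; names here are assumptions for illustration.

def run_tests(program, tests):
    """Execute: return the failing (input, expected) cases."""
    return [(i, o) for i, o in tests if program(i) != o]

def seidr(synthesize, repair, tests, max_rounds=5):
    program = synthesize()                           # Synthesize
    for _ in range(max_rounds):
        failures = run_tests(program, tests)         # Execute
        if not failures:
            return program                           # all unit tests pass
        hint = f"fails on input {failures[0][0]!r}"  # Instruct
        program = repair(program, hint)              # Debug + Repair
    return None                                      # unsolved within budget

# Toy "near-miss": the first draft is off by one; one repair round fixes it.
draft = lambda: (lambda x: x + 2)             # mock buggy synthesis
fixed = lambda prog, hint: (lambda x: x + 1)  # mock repair step
tests = [(1, 2), (5, 6)]
solution = seidr(draft, fixed, tests)
```

The point of the sketch is the control flow: failed executions are turned into natural-language instructions that condition the next repair attempt, rather than discarding the near-miss program.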
Problem

Research questions and friction points this paper is trying to address.

Addresses the near-miss syndrome in LLM-generated code.
Explores optimal prompts and ranking algorithms for debugging rounds.
Balances repairing failed programs against generating new ones.
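The candidate ranking explored in the paper can use lexicase selection, which filters the population through test cases in random order, keeping only candidates with the best error on each case in turn. A minimal sketch of the generic lexicase algorithm (not the paper's exact implementation) follows; the `(program, errors)` representation is an assumption.

```python
import random

def lexicase_select(population, k=1, rng=random):
    """Lexicase selection over (program, errors) pairs, where `errors`
    is a list of per-test-case error values (0 = pass)."""
    selected = []
    for _ in range(k):
        pool = list(population)
        cases = list(range(len(pool[0][1])))
        rng.shuffle(cases)  # consider test cases in random order
        for c in cases:
            best = min(e[c] for _, e in pool)
            pool = [(p, e) for p, e in pool if e[c] == best]
            if len(pool) == 1:
                break  # a single candidate survives the filter
        selected.append(rng.choice(pool)[0])
    return selected

# A program that is best (or tied) on every case always survives:
pop = [("a", [0, 1]), ("b", [1, 0]), ("c", [0, 0])]
winner = lexicase_select(pop)[0]
```

Unlike tournament selection, which aggregates errors into one fitness score, lexicase keeps specialists alive: a candidate that passes a rare test case can survive even if its total error is high.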
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework SEIDR for program synthesis
Iterative debugging with LLMs to fix near-miss errors
Hybrid debug strategies outperform traditional methods