🤖 AI Summary
Large language models (LLMs) for program synthesis frequently suffer from the “near-miss syndrome”: they generate semantically plausible yet syntactically or logically flawed code that fails unit tests due to subtle errors.
Method: We propose SEIDR, a multi-agent framework establishing a closed-loop Synthesize → Execute → Instruct → Debug → Repair workflow (sketched below). It introduces a hybrid debugging strategy that mixes repairing flawed candidates with replacing them by newly generated ones, and a multi-round candidate-program ranking mechanism, based on lexicase or tournament selection, tailored to instruction-tuned LLMs.
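To make the loop concrete, here is a minimal sketch of how such a synthesize/execute/instruct/repair cycle could be wired up. This is not the paper's actual implementation or API; the callables (`synthesize`, `repair`, `run_tests`, `rank`) and the population parameters are hypothetical placeholders.

```python
from typing import Callable, List, Optional, Tuple

Candidate = Tuple[str, List[str]]  # (program source, names of failing tests)

def seidr_loop(
    synthesize: Callable[[], str],          # LLM call that drafts a fresh program
    repair: Callable[[str, str], str],      # LLM call that fixes a program given a bug report
    run_tests: Callable[[str], List[str]],  # returns the failing unit tests for a program
    rank: Callable[[List[Candidate]], List[Candidate]],  # e.g. lexicase/tournament
    max_rounds: int = 10,
    pop_size: int = 8,
) -> Optional[str]:
    # Synthesize: draft an initial population of candidate programs.
    population = [synthesize() for _ in range(pop_size)]
    for _ in range(max_rounds):
        # Execute: run every candidate against the unit tests.
        scored = [(prog, run_tests(prog)) for prog in population]
        for prog, failures in scored:
            if not failures:
                return prog  # a candidate passes all tests
        # Instruct: turn failing tests into a natural-language bug report,
        # then apply the hybrid strategy: repair the best near-misses and
        # top the population up with freshly synthesized replacements.
        survivors = rank(scored)[: pop_size // 2]
        population = [repair(prog, "Failing tests: " + ", ".join(failures))
                      for prog, failures in survivors]
        population += [synthesize() for _ in range(pop_size - len(population))]
    return None  # budget exhausted without a fully passing program
```

The repair/replace split here (half the population each) is an arbitrary choice for illustration; balancing that trade-off is exactly one of the design questions the paper studies empirically.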
Contribution/Results: Instantiated with GPT-3.5 and Llama 3-8B, SEIDR solves 18 C++ and 20 Python tasks on PSB2 at least once across experiments. On HumanEval-C++, Llama 3-8B achieves pass@100 = 84.2% and solves 162 of 164 problems at least once across runs (163 of 164 with GPT-3.5), a substantial improvement over single-pass LLM use without a repair phase.
📄 Abstract
Program synthesis with Large Language Models (LLMs) suffers from a “near-miss syndrome”: the generated code closely resembles a correct solution but fails unit tests due to minor errors. We address this with a multi-agent framework called Synthesize, Execute, Instruct, Debug, and Repair (SEIDR). Effectively applying SEIDR to instruction-tuned LLMs requires determining (a) optimal prompts for LLMs, (b) what ranking algorithm selects the best programs in debugging rounds, and (c) balancing the repair of unsuccessful programs with the generation of new ones. We empirically explore these trade-offs by comparing replace-focused, repair-focused, and hybrid debug strategies. We also evaluate lexicase and tournament selection to rank candidates in each generation. On Program Synthesis Benchmark 2 (PSB2), our framework outperforms both conventional use of OpenAI Codex without a repair phase and traditional genetic programming approaches. SEIDR outperforms the use of an LLM alone, solving 18 problems in C++ and 20 in Python on PSB2 at least once across experiments. To assess generalizability, we employ GPT-3.5 and Llama 3 on the PSB2 and HumanEval-X benchmarks. Although SEIDR with these models does not surpass current state-of-the-art methods on the Python benchmarks, the results on HumanEval-C++ are promising. SEIDR with Llama 3-8B achieves an average pass@100 of 84.2%. Across all SEIDR runs, 163 of 164 problems are solved at least once with GPT-3.5 in HumanEval-C++, and 162 of 164 with the smaller Llama 3-8B. We conclude that SEIDR effectively overcomes the near-miss syndrome in program synthesis with LLMs.
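For the ranking step, the abstract names lexicase and tournament selection. Below is a minimal sketch of lexicase selection, assuming each candidate program is scored with a per-test error vector (0 = pass); the data layout is an illustrative assumption, not the paper's representation.

```python
import random
from typing import Dict, List

def lexicase_select(errors: Dict[str, List[float]]) -> str:
    """Pick one candidate program; errors[prog][t] is its error on unit test t."""
    pool = list(errors)  # all candidate program identifiers
    cases = list(range(len(next(iter(errors.values())))))
    random.shuffle(cases)  # each selection visits the tests in a fresh random order
    for t in cases:
        best = min(errors[p][t] for p in pool)
        pool = [p for p in pool if errors[p][t] == best]  # keep only the elite on test t
        if len(pool) == 1:
            break
    return random.choice(pool)  # break any remaining ties at random
```

Tournament selection, by contrast, samples a small group of candidates and keeps the one with the lowest aggregate error. Lexicase instead rewards specialists that excel on individual tests, which fits the near-miss setting, where most candidates fail only a handful of cases.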