QualityFlow: An Agentic Workflow for Program Synthesis Controlled by LLM Quality Checks

📅 2025-01-20
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work addresses the weak correctness guarantees of existing program synthesis methods by proposing a collaborative framework that integrates a dynamic multi-agent workflow with an LLM-based quality checker. Methodologically, it establishes a closed-loop collaboration among code generation, test execution, and self-debugging agents, and introduces an LLM quality checker that explicitly models program execution to assess unit-test conformity in real time, enabling dynamic submission, problem clarification, and step-level backtracking; this is augmented by diversified prompting and quality-feedback-driven adaptive decision-making. The key contribution is the first integration of dynamic, execution-aware quality verification into the synthesis pipeline, enabling fine-grained procedural control. Empirically, the approach achieves state-of-the-art results on MBPP, HumanEval, and their stricter EvalPlus variants, significantly outperforming static workflows and single-attempt zero-shot synthesis baselines.

📝 Abstract
We introduce QualityFlow, a dynamic agentic workflow for program synthesis. Given the English description of a programming problem and a set of unit tests, the model's goal is to synthesize the correct program that solves the problem and passes the tests. QualityFlow consists of multiple large language model (LLM) agents that resemble a software development team, including code generation, testing, and self-debugging. Existing program synthesis methods face three major limitations: assumption of visible unit test conformity, bottleneck of synthesized test quality, and deviation of self-debugging trajectory. To address them, we propose the LLM Quality Checker, which explicitly "imagines" whether the synthesized programs' execution would conform to the unit tests. The Quality Checks dynamically control the workflow, including actions to submit the final answer, clarify the problem statement, and revert previous workflow steps. As a result, our Quality Checker can precisely accept any correct program, mitigate faulty synthesized tests, and prevent potential workflow deviation. The success of the Quality Checker further enables Diversified Prompting, which encourages variations in LLM responses to maximize the possibility that a correct program appears and passes the quality check. In experiments, QualityFlow establishes state-of-the-art results on four program synthesis benchmarks: MBPP, HumanEval, and the stricter evaluations of both MBPP and HumanEval from EvalPlus. Our systematic analysis shows that the dynamic workflow controlled by LLM quality checks can outperform static workflows and single-attempt zero-shot synthesis. The Quality Checker is the center of our investigation, and we dissect its individual performance and integrated impact on workflow accuracy, alongside other ablation experiments that justify our workflow design.
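The abstract describes a control loop in which a quality checker decides, after each synthesis attempt, whether to submit the answer, clarify the problem, or revert a workflow step. The sketch below is a minimal toy illustration of that control pattern, not the paper's implementation: all names (`quality_check`, `synthesis_loop`, `Action`) are hypothetical, and the LLM-based checker, which "imagines" execution, is replaced here by actually running the tests.

```python
from enum import Enum

class Action(Enum):
    SUBMIT = "submit"    # candidate conforms to the unit tests
    CLARIFY = "clarify"  # tests could not run; problem statement unclear
    REVERT = "revert"    # candidate fails; back out of this workflow step

def quality_check(program, tests):
    # Stand-in for the paper's LLM Quality Checker. The real checker
    # reasons about execution traces without running code; here we
    # execute the tests directly as a deterministic proxy.
    try:
        for test in tests:
            if not test(program):
                return Action.REVERT
        return Action.SUBMIT
    except Exception:
        return Action.CLARIFY

def synthesis_loop(candidates, tests, max_steps=10):
    """Toy generate/check loop: accept the first candidate the checker
    approves; on REVERT, discard the attempt (step-level backtracking)
    and move on to the next diversified candidate."""
    history = []
    for program in candidates[:max_steps]:
        history.append(program)
        action = quality_check(program, tests)
        if action == Action.SUBMIT:
            return program
        if action == Action.REVERT:
            history.pop()  # back out of the failed step
    return None  # no candidate passed the quality check

# Toy problem: return the square of x. A buggy and a correct candidate
# emulate diversified prompting yielding varied LLM responses.
buggy = lambda x: x + x
correct = lambda x: x * x
tests = [lambda f: f(3) == 9, lambda f: f(0) == 0]

result = synthesis_loop([buggy, correct], tests)
assert result is correct
```

The point of the sketch is the decision structure: the checker's verdict, not a fixed step count, determines whether the workflow terminates, backtracks, or asks for clarification.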
Problem

Research questions and friction points this paper is trying to address.

Synthesize correct programs from English descriptions and unit tests
Dynamic workflow controlled by LLM quality checks for accuracy
Improve program synthesis benchmarks with agentic development team
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic agentic workflow for program synthesis
LLM Quality Checker for execution conformity
State-of-the-art results on benchmarks
Yaojie Hu
Iowa State University
Qiang Zhou
Amazon Web Services
Qihong Chen
University of California, Irvine
Xiaopeng Li
Amazon Web Services
Linbo Liu
Amazon Web Services
Dejiao Zhang
Amazon Web Services
Amit Kachroo
Amazon Web Services
Talha Oz
Amazon
Omer Tripp
Amazon