CodeFlowBench: A Multi-turn, Iterative Benchmark for Complex Code Generation

📅 2025-04-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing code generation benchmarks overlook the evaluation of multi-turn iterative function reuse, termed *codeflow*, a critical capability for real-world software development. To address this gap, we introduce CodeFlowBench, the first fine-grained benchmark explicitly designed for codeflow assessment. It encompasses 5,258 Codeforces problems and employs automated dependency-tree decomposition to generate function-level subtasks with corresponding unit tests. We further propose a multi-turn reuse evaluation framework and dependency-depth-aware metrics. Experimental results reveal substantial performance degradation of mainstream LLMs in iterative settings (for instance, o1-mini's pass@1 drops by 17 percentage points, from 37.8% single-turn to 20.8% multi-turn) and consistent difficulty in handling deep dependencies and structurally complex tasks. This work provides the first systematic empirical characterization of LLMs' limitations in progressive code evolution, delivering a reproducible, diagnosable benchmark and foundational insights for code generation research.
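The dependency-tree decomposition described above implies that each function-level subtask is emitted only after the functions it depends on. A minimal sketch of that ordering step, assuming a simple mapping from each function to the functions it calls (the names `topo_order` and the example functions are illustrative, not from the paper):

```python
# Hypothetical sketch of ordering function-level subtasks so that every
# dependency precedes its caller (Kahn's topological sort). The data
# representation is an assumption, not CodeFlowBench's actual pipeline.
from collections import defaultdict, deque

def topo_order(deps):
    """deps: mapping function_name -> list of function names it calls.
    Returns all function names ordered so dependencies come first."""
    indegree = defaultdict(int)
    dependents = defaultdict(list)
    nodes = set(deps)
    for fn, called in deps.items():
        nodes.update(called)
        for callee in called:
            indegree[fn] += 1          # fn waits on each of its callees
            dependents[callee].append(fn)
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for d in dependents[n]:
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(nodes):
        raise ValueError("cycle detected: not a dependency tree")
    return order
```

For example, `topo_order({"solve": ["gcd", "lcm"], "lcm": ["gcd"]})` places `gcd` before `lcm` and `lcm` before `solve`, mirroring how subtasks would be presented turn by turn.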

📝 Abstract
Real-world development demands code that is readable, extensible, and testable, organizing the implementation into modular components and iteratively reusing pre-implemented code. We term this iterative, multi-turn process codeflow and introduce CodeFlowBench, the first benchmark designed to comprehensively evaluate LLMs' ability to perform codeflow, namely to implement new functionality by reusing existing functions over multiple turns. CodeFlowBench comprises 5,258 problems drawn from Codeforces and is continuously updated via an automated pipeline that decomposes each problem into a series of function-level subproblems based on its dependency tree, pairing each subproblem with unit tests. We further propose a novel evaluation framework with tasks and metrics tailored to multi-turn code reuse. In experiments across various LLMs under both multi-turn and single-turn patterns, we observe poor model performance on CodeFlowBench, with a substantial drop in the iterative codeflow scenario: for instance, o1-mini achieves a pass@1 of 20.8% in the multi-turn pattern versus 37.8% in the single-turn pattern. Further analysis shows that different models excel at different dependency depths, yet all struggle to solve structurally complex problems correctly, highlighting the challenges current LLMs face as code generation tools when performing codeflow. Overall, CodeFlowBench offers a comprehensive benchmark and new insights into LLM capabilities for multi-turn, iterative code generation, guiding future advances in code generation tasks.
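The pass@1 numbers quoted above belong to the standard pass@k family of metrics. A common unbiased estimator, given n sampled completions of which c pass the unit tests, is pass@k = 1 − C(n−c, k)/C(n, k). The sketch below implements that generic definition; CodeFlowBench's exact sampling protocol is not specified here, so treat this as the textbook formula rather than the paper's harness:

```python
# Standard unbiased pass@k estimator (generic definition, assumed here;
# not necessarily CodeFlowBench's exact evaluation code).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k drawn completions passes,
    given n total samples of which c passed the unit tests."""
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 10 samples of which 5 pass, `pass_at_k(10, 5, 1)` is 0.5.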
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability for iterative, multi-turn code reuse
Assessing model performance on modular, testable code generation
Benchmarking LLMs on complex, dependency-based coding tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark for multi-turn iterative code generation
Automated pipeline decomposes problems into subproblems
Novel evaluation framework for code reuse metrics
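The multi-turn reuse evaluation described above can be sketched as a loop that presents subtasks in dependency order, letting the model see (and reuse) previously accepted functions. All names here (`eval_multi_turn`, the `model` and `run_tests` callables) are hypothetical stand-ins for the paper's actual harness:

```python
# Hedged sketch of a multi-turn codeflow evaluation loop, assuming a
# generic `model` callable and a `run_tests` oracle; the interfaces are
# illustrative assumptions, not CodeFlowBench's real API.
def eval_multi_turn(subtasks, model, run_tests):
    """subtasks: list of (prompt, tests) pairs in dependency order.
    Each turn the model is shown previously accepted functions and is
    expected to reuse them. Returns a per-turn pass/fail list."""
    context, results = [], []
    for prompt, tests in subtasks:
        code = model(prompt, context)            # prior functions visible
        passed = run_tests(code, context, tests)  # unit tests for this turn
        results.append(passed)
        if passed:
            context.append(code)                  # reusable in later turns
    return results
```

A single-turn baseline would instead concatenate all subtasks into one prompt, which is the contrast behind the 20.8% vs. 37.8% pass@1 gap reported above.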