FeatureBench: Benchmarking Agentic Coding for Complex Feature Development

📅 2026-02-11
🤖 AI Summary
Existing agent evaluation benchmarks are largely confined to simplistic, single-commit bug fixes and fail to assess end-to-end capabilities in complex software development. This work proposes FeatureBench, a benchmark tailored to realistic feature-development scenarios that leverages unit-test dependency graphs to automatically extract cross-commit, multi-pull-request, function-level tasks. The resulting framework establishes an executable, verifiable, and leakage-resistant evaluation suite with continuous updates. Built from open-source repositories, the initial dataset comprises 200 challenging tasks and 3,825 executable environments. Experimental results reveal a stark performance gap: a state-of-the-art model that achieves a 74.4% resolution rate on SWE-bench succeeds on merely 11.0% of FeatureBench tasks, underscoring significant limitations in handling complex, real-world development work.

📝 Abstract
Agents powered by large language models (LLMs) are increasingly adopted in the software industry, contributing code as collaborators or even autonomous developers. As their presence grows, it becomes important to assess the current boundaries of their coding abilities. Existing agentic coding benchmarks, however, cover a limited task scope, e.g., bug fixing within a single pull request (PR), and often rely on non-executable evaluations or lack an automated approach for continually updating the evaluation coverage. To address these issues, we propose FeatureBench, a benchmark designed to evaluate agentic coding performance in end-to-end, feature-oriented software development. FeatureBench incorporates an execution-based evaluation protocol and a scalable test-driven method that automatically derives tasks from code repositories with minimal human effort. By tracing from unit tests along a dependency graph, our approach can identify feature-level coding tasks spanning multiple commits and PRs scattered across the development timeline, while ensuring the proper functioning of other features after the separation. Using this framework, we curated 200 challenging evaluation tasks and 3,825 executable environments from 24 open-source repositories in the first version of our benchmark. Empirical evaluation reveals that a state-of-the-art agentic model such as Claude 4.5 Opus, which achieves a 74.4% resolved rate on SWE-bench, succeeds on only 11.0% of tasks, opening new opportunities for advancing agentic coding. Moreover, benefiting from our automated task collection toolkit, FeatureBench can be easily scaled and updated over time to mitigate data leakage. The inherent verifiability of constructed environments also makes our method potentially valuable for agent training.
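The abstract's core extraction idea, tracing from a feature's unit tests along a dependency graph to collect every function the feature spans, can be sketched as a simple graph traversal. This is an illustrative reconstruction, not the authors' actual toolkit: the graph, function names, and API below are all hypothetical.

```python
# Hypothetical sketch of test-driven task extraction: starting from a feature's
# unit tests, walk a call/dependency graph to collect all functions the feature
# touches, which in a real repository may span multiple commits and PRs.
from collections import deque

def trace_feature_functions(dep_graph, feature_tests):
    """Return every function reachable from the given unit tests.

    dep_graph: dict mapping a test or function name to the names it calls.
    feature_tests: iterable of unit-test names that exercise one feature.
    """
    reached = set()
    queue = deque(feature_tests)
    while queue:
        node = queue.popleft()
        for callee in dep_graph.get(node, ()):
            if callee not in reached:       # visit each function once
                reached.add(callee)
                queue.append(callee)
    return reached

# Toy repository: two tests exercise an "export" feature built on shared helpers.
dep_graph = {
    "test_export_csv": ["export_csv"],
    "test_export_json": ["export_json"],
    "export_csv": ["serialize_rows", "open_sink"],
    "export_json": ["serialize_rows", "open_sink"],
    "serialize_rows": [],
    "open_sink": [],
}
feature = trace_feature_functions(dep_graph, ["test_export_csv", "test_export_json"])
print(sorted(feature))
# → ['export_csv', 'export_json', 'open_sink', 'serialize_rows']
```

The traversal yields a candidate set of functions to remove when constructing a task; the paper's method additionally verifies that the remaining features still pass their own tests after the separation.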
Problem

Research questions and friction points this paper is trying to address.

agentic coding
benchmark
feature development
execution-based evaluation
automated task collection
Innovation

Methods, ideas, or system contributions that make the work stand out.

agentic coding
feature-oriented development
execution-based evaluation
automated task generation
dependency-aware testing
Qixing Zhou
Institute of Automation, Chinese Academy of Sciences
Jiacheng Zhang
Institute of Automation, Chinese Academy of Sciences
Haiyang Wang
Huawei Technologies Co., Ltd
Rui Hao
Institute of Automation, Chinese Academy of Sciences
Jiahe Wang
Institute of Automation, Chinese Academy of Sciences
Minghao Han
Institute of Automation, Chinese Academy of Sciences
Yuxue Yang
Institute of Automation, Chinese Academy of Sciences
Shuzhe Wu
Institute of Computing Technology, Chinese Academy of Sciences
Computer Vision, Machine Learning
Feiyang Pan
Institute of Computing Technology, Chinese Academy of Sciences
Reinforcement Learning
Lue Fan
Institute of Automation, Chinese Academy of Sciences
Dandan Tu
Huawei Technologies Co., Ltd
Zhaoxiang Zhang
Institute of Automation, Chinese Academy of Sciences
Computer Vision, Pattern Recognition, Biologically-inspired Learning