🤖 AI Summary
Existing benchmarks struggle to evaluate large language models' ability to generate complete software from scratch, as they rely on predefined scaffolding and lack end-to-end behavioral validation. This work proposes CLI-Tool-Bench, the first structure-agnostic, end-to-end benchmark for CLI tool generation, which employs black-box differential testing within a sandboxed environment to assess multi-level equivalence between model-generated and human-written tools in terms of system side effects and terminal outputs. Evaluation of seven state-of-the-art LLMs on 100 real-world CLI tools reveals a maximum success rate below 43%, with increased token consumption yielding no consistent performance gains; models also exhibit a strong tendency toward generating monolithic code. By moving beyond traditional white-box unit testing, this study establishes a new paradigm for evaluating code generation capabilities.
📄 Abstract
Large Language Models (LLMs) are driving a shift towards intent-driven development, where agents build complete software from scratch. However, existing benchmarks fail to assess this 0-to-1 generation capability due to two limitations: reliance on predefined scaffolds that ignores repository structure planning, and rigid white-box unit testing that lacks end-to-end behavioral validation. To bridge this gap, we introduce CLI-Tool-Bench, a structure-agnostic benchmark for evaluating the ground-up generation of Command-Line Interface (CLI) tools. It features 100 diverse real-world repositories evaluated via a black-box differential testing framework: agent-generated software is executed in sandboxes, and its system side effects and terminal outputs are compared against human-written oracles using multi-tiered equivalence metrics. Evaluating seven state-of-the-art LLMs, we find that even the top models achieve under 43% success, highlighting the ongoing challenge of 0-to-1 generation. Furthermore, higher token consumption does not guarantee better performance, and agents tend to generate monolithic code.