🤖 AI Summary
Existing benchmarks for AI programming agents often diverge from real-world industrial scenarios in language distribution, prompt style, and code structure, limiting their ability to assess practical performance. This work proposes a methodology for constructing evaluation benchmarks grounded in authentic production environments and introduces ProdCodeBench, a benchmark that systematically extracts verbatim prompts, corresponding code changes, and associated test cases from AI programming assistant sessions within monorepos, spanning seven programming languages. To improve evaluation reliability, the curation pipeline incorporates LLM-based task classification, test relevance validation, static analysis, and multi-run execution consistency checks. Experiments on four foundation models yield solve rates of 53.2%–72.2% and show that agents making greater use of verification tools, such as test execution and static analysis, solve more tasks, suggesting that iterative verification substantially improves agent performance in unfamiliar codebases.
📝 Abstract
Benchmarks that reflect production workloads are better suited to evaluating AI coding agents in industrial settings, yet existing benchmarks differ from real usage in programming language distribution, prompt style, and codebase structure. This paper presents a methodology for curating production-derived benchmarks, illustrated through ProdCodeBench, a benchmark built from real sessions with a production AI coding assistant. We detail our data collection and curation practices, including LLM-based task classification, test relevance validation, and multi-run stability checks, which address the challenges of constructing reliable evaluation signals from monorepo environments. Each curated sample consists of a verbatim prompt, a committed code change, and fail-to-pass tests, spanning seven programming languages. Our systematic analysis of four foundation models yields solve rates from 53.2% to 72.2%, revealing that models making greater use of work-validation tools, such as executing tests and invoking static analysis, achieve higher solve rates. This suggests that iterative verification underpins effective agent behavior and that exposing codebase-specific verification mechanisms may significantly improve the performance of externally trained agents operating in unfamiliar environments. We share our methodology and lessons learned to enable other organizations to construct similar production-derived benchmarks.
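The fail-to-pass criterion combined with multi-run stability checks can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the run-count threshold, and the strict all-fail/all-pass rule are assumptions. The idea is that a sample only enters the benchmark if its tests fail deterministically on the base commit and pass deterministically after the committed change, filtering out flaky tests.

```python
# Hedged sketch of fail-to-pass validation with a multi-run stability check.
# Names and thresholds are illustrative assumptions, not from the paper.

def is_stable_fail_to_pass(pre_fix_runs: list[bool],
                           post_fix_runs: list[bool],
                           min_runs: int = 3) -> bool:
    """Accept a sample only when test outcomes are deterministic in both
    repository states: every run fails before the committed change
    (pre_fix_runs all False) and every run passes after it
    (post_fix_runs all True)."""
    if len(pre_fix_runs) < min_runs or len(post_fix_runs) < min_runs:
        return False  # not enough runs to judge stability
    return not any(pre_fix_runs) and all(post_fix_runs)

# A flaky test (mixed outcomes before the fix) is rejected:
print(is_stable_fail_to_pass([False, False, True], [True, True, True]))   # False
# A clean, repeatable fail-to-pass signal is kept:
print(is_stable_fail_to_pass([False, False, False], [True, True, True]))  # True
```

In practice each boolean would come from executing the sample's test command in a fresh checkout, so the check doubles as a guard against environment nondeterminism.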