🤖 AI Summary
Existing AI programming agents lack a reliable evaluation benchmark for repository-scale code modernization: current approaches rely on language-specific unit tests and offer no deterministic ground-truth answers. This work proposes the first implementation-agnostic evaluation framework tailored to repository-level modernization; it verifies functional equivalence between source and target repositories through black-box testing and isolates test suites from agents to prevent overfitting. The framework spans eight programming languages and 21 real-world repositories (up to 211K lines of code), totaling 1.6 million lines of code and 11,616 test cases. Experimental results reveal that state-of-the-art agents achieve an average pass rate of only 15.3% on projects exceeding 50K LOC, underscoring the significant challenges that remain in large-scale autonomous code modernization.
📝 Abstract
The evolution of AI coding agents has shifted the frontier from simple snippet completion to autonomous repository-level engineering. However, evaluating these agents on general repository generation remains ill-posed: the lack of deterministic ground truth leads to ambiguous metrics. Code modernization via automated translation offers a more rigorous alternative because it provides a fixed ground truth, the source repository. Yet existing benchmarks are limited to small-scale repositories and rely on language-specific unit tests visible to the agent, allowing test-driven overfitting.
We address these limitations by introducing a benchmarking framework for repository-level code modernization built on an implementation-agnostic evaluation paradigm. The framework is instantiated as RepoMod-Bench: a benchmark of 21 real-world repositories with standardized interfaces, spanning 8 programming languages. The benchmark contains 1.6M lines of code (LOC) and 11,616 tests, with repository sizes ranging from 14 to 211K LOC. Because each repository exposes a standardized interface, an implementation-agnostic test suite can verify functional equivalence between the source and target implementations. This black-box approach keeps verification consistent across languages, and our environment hides all test suites from agents to prevent test-driven shortcuts. Evaluating four state-of-the-art agent configurations reveals a sharp scaling collapse: average pass rates drop from 91.3% on projects under 10K LOC to 15.3% on projects exceeding 50K LOC. These results demonstrate that autonomous modernization at scale remains a significant open challenge. Our benchmark and code are available at https://github.com/Modelcode-ai/mcode-benchmark.
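To make the black-box paradigm concrete, the sketch below shows one way implementation-agnostic equivalence testing can work: the same input is driven through the standardized interface of both the source and target repositories, and their outputs are compared. This is a minimal illustration under stated assumptions, not the benchmark's actual harness; the commands, file paths, and test-case format (`tests/cases.json`, a `process` subcommand over stdin/stdout) are hypothetical.

```python
# Minimal sketch of implementation-agnostic, black-box equivalence testing.
# All commands, paths, and test-case formats are hypothetical illustrations,
# not RepoMod-Bench's actual harness.
import json
import subprocess
from pathlib import Path


def run_black_box(entrypoint: list[str], stdin_payload: str) -> str:
    """Invoke an implementation through its standardized CLI and capture stdout.

    Because only the external interface is exercised, the same test case
    applies regardless of the language the repository is written in.
    """
    result = subprocess.run(
        entrypoint,
        input=stdin_payload,
        capture_output=True,
        text=True,
        timeout=60,
    )
    result.check_returncode()
    return result.stdout.strip()


def check_equivalence(source_cmd: list[str], target_cmd: list[str],
                      test_cases: list[dict]) -> float:
    """Return the fraction of test cases where source and target outputs match.

    The source repository serves as the fixed ground truth: the translated
    (target) repository passes a case only if it reproduces the source output.
    """
    passed = 0
    for case in test_cases:
        expected = run_black_box(source_cmd, case["input"])
        actual = run_black_box(target_cmd, case["input"])
        passed += expected == actual
    return passed / len(test_cases)


if __name__ == "__main__":
    # Hypothetical example: a Python source repo translated to Go, both
    # exposing the same `process` subcommand over stdin/stdout. The test
    # cases live outside the agent's workspace, so it cannot overfit to them.
    cases = json.loads(Path("tests/cases.json").read_text())
    rate = check_equivalence(
        ["python", "source_repo/main.py", "process"],
        ["./target_repo/bin/app", "process"],
        cases,
    )
    print(f"pass rate: {rate:.1%}")
```

Note the design choice this illustrates: because verification touches only inputs and outputs at a standardized boundary, no per-language unit tests are needed, and keeping the test suite outside the agent's environment rules out the test-driven shortcuts the abstract describes.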