🤖 AI Summary
Existing code generation benchmarks are largely confined to single domains and programming languages, limiting their ability to assess the generalization capabilities of large models in complex industrial settings. This work proposes the first industrial-grade, multi-domain, multi-language benchmark for code generation, comprising 125 core problems and 579 subproblems drawn from real-world scenarios in finance, automation, aerospace, and other sectors, with support for MATLAB, Python, C++, Stata, and more. The dataset features detailed problem descriptions, comprehensive test cases, and an accompanying automated evaluation pipeline. Experimental results show that Claude 4.5 Opus achieves the best performance, with pass rates of 68.1% on subproblems and 42.5% on core problems. The full dataset and evaluation code will be released publicly.
📝 Abstract
Code generation and comprehension by Large Language Models (LLMs) have emerged as core drivers of industrial intelligence and decision optimization, finding widespread application in fields such as finance, automation, and aerospace. Although recent advancements have demonstrated the remarkable potential of LLMs in general code generation, existing benchmarks are mainly confined to single domains and languages. Consequently, they fail to effectively evaluate the generalization capabilities required for real-world industrial applications or to reflect the coding proficiency demanded by complex industrial scenarios. To bridge this gap, we introduce IndustryCode, the first comprehensive benchmark designed to span multiple industrial domains and programming languages. IndustryCode comprises 579 sub-problems derived from 125 primary industrial challenges, accompanied by rigorous problem descriptions and test cases. It covers a wide range of fields, including finance, automation, aerospace, and remote sensing, and incorporates diverse programming languages such as MATLAB, Python, C++, and Stata. In our evaluation, the top-performing model, Claude 4.5 Opus, achieved pass rates of 68.1% on sub-problems and 42.5% on main problems. The benchmark dataset and automated evaluation code will be made publicly available upon acceptance.
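The abstract reports two distinct metrics: a sub-problem pass rate and a main-problem pass rate. A minimal sketch of how such an aggregation might work, under the assumption (not stated in the abstract) that a main problem counts as solved only when every one of its sub-problems passes:

```python
from collections import defaultdict

def pass_rates(results):
    """Compute (sub-problem pass rate, main-problem pass rate).

    results: list of (main_id, sub_id, passed) tuples, one per sub-problem.
    Assumption: a main problem is solved only if all its sub-problems pass.
    """
    sub_total = len(results)
    sub_passed = sum(1 for _, _, ok in results if ok)

    # Group sub-problem outcomes by their parent main problem.
    by_main = defaultdict(list)
    for main_id, _, ok in results:
        by_main[main_id].append(ok)
    main_passed = sum(1 for oks in by_main.values() if all(oks))

    return sub_passed / sub_total, main_passed / len(by_main)

# Toy example: P1 fully solved, P2 only partially solved.
sub_rate, main_rate = pass_rates([
    ("P1", "a", True), ("P1", "b", True),
    ("P2", "a", True), ("P2", "b", False),
])
print(sub_rate, main_rate)  # 0.75 0.5
```

This illustrates why the main-problem rate (42.5%) is necessarily lower than the sub-problem rate (68.1%) under an all-sub-problems-must-pass criterion; the paper's actual aggregation rule may differ.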