IndustryCode: A Benchmark for Industry Code Generation

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code generation benchmarks are largely confined to single domains and programming languages, limiting their ability to assess the generalization capabilities of large models in complex industrial settings. This work proposes the first industrial-grade, multi-domain, multi-language benchmark for code generation, comprising 125 core problems and 579 subproblems drawn from real-world scenarios in finance, automation, aerospace, and other sectors, with support for MATLAB, Python, C++, Stata, and more. The dataset features detailed problem descriptions, comprehensive test cases, and an accompanying automated evaluation pipeline. Experimental results show that Claude 4.5 Opus achieves the best performance, with pass rates of 68.1% on subproblems and 42.5% on core problems. The full dataset and evaluation code will be released publicly.
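The summary reports two levels of results: a pass rate over sub-problems and a lower pass rate over core problems. A minimal sketch of how such two-level scoring could work, assuming (this is an assumption, not a detail from the paper) that a core problem counts as passed only when every one of its sub-problems passes; the problem IDs and data structure are purely illustrative:

```python
def score(results):
    """Compute (sub-problem pass rate, core-problem pass rate).

    results: dict mapping core-problem id -> list of bool sub-problem outcomes.
    A core problem is counted as passed only if ALL its sub-problems pass
    (an assumed aggregation rule, not confirmed by the paper).
    """
    subs = [ok for outcomes in results.values() for ok in outcomes]
    sub_rate = sum(subs) / len(subs)
    core_rate = sum(all(outcomes) for outcomes in results.values()) / len(results)
    return sub_rate, core_rate

# Hypothetical example: one core problem fails because a sub-problem failed.
example = {
    "finance-01": [True, True, False],  # 2/3 sub-problems pass; core fails
    "aero-07": [True, True],            # all sub-problems pass; core passes
}
print(score(example))  # (0.8, 0.5)
```

Under this rule the core-problem rate is always at most the sub-problem rate, which is consistent with the 42.5% vs. 68.1% gap reported above.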
📝 Abstract
Code generation and comprehension by Large Language Models (LLMs) have emerged as core drivers of industrial intelligence and decision optimization, finding widespread application in fields such as finance, automation, and aerospace. Although recent advancements have demonstrated the remarkable potential of LLMs in general code generation, existing benchmarks are mainly confined to single domains and languages. Consequently, they fail to effectively evaluate the generalization capabilities required for real-world industrial applications or to reflect the coding proficiency demanded by complex industrial scenarios. To bridge this gap, we introduce IndustryCode, the first comprehensive benchmark designed to span multiple industrial domains and programming languages. IndustryCode comprises 579 sub-problems derived from 125 primary industrial challenges, accompanied by rigorous problem descriptions and test cases. It covers a wide range of fields, including finance, automation, aerospace, and remote sensing, and incorporates diverse programming languages such as MATLAB, Python, C++, and Stata. In our evaluation, the top-performing model, Claude 4.5 Opus, achieved an overall accuracy of 68.1% on sub-problems and 42.5% on main problems. The benchmark dataset and automated evaluation code will be made publicly available upon acceptance.
Problem

Research questions and friction points this paper is trying to address.

code generation
large language models
industrial applications
benchmark
generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

IndustryCode
code generation benchmark
multi-domain evaluation
industrial intelligence
large language models
Authors

Puyu Zeng - Shanghai Jiao Tong University, Shanghai, China
Zhaoxi Wang - Shanghai Jiao Tong University, Shanghai, China
Zhixu Duan - Shanghai Jiao Tong University, Shanghai, China
Liang Feng - Shanghai Jiao Tong University, Shanghai, China
Shaobo Wang - Shanghai Jiao Tong University. Interests: Large Language Model, Data-Centric AI, Data Synthesis, Data Selection, Explainable AI
Cunxiang Wang - Tsinghua University; ZhipuAI. Interests: Large Language Models, LLM Evaluation, LLM Post-training
Jinghang Wang - Alibaba Group, Hangzhou, China
Bing Zhao - SRI International. Interests: Natural Language Processing, Machine Learning, Optimizations
Hu Wei - Alibaba Group, Hangzhou, China
Linfeng Zhang - DP Technology; AI for Science Institute. Interests: AI for Science, multi-scale modeling, molecular simulation, drug/materials design