🤖 AI Summary
Existing benchmarks for large language models fail to capture the practical demands of high-energy physics and high-performance computing, such as scientific constraints, complex dependencies, and performance-critical requirements. To address this gap, this work proposes the first reproducible, automated, and multidimensional evaluation framework tailored to this domain, encompassing three core tasks: Doxygen-style documentation generation, end-to-end GPU kernel synthesis, and vision-enhanced scientific data analysis. By establishing a unified evaluation protocol and an automated scoring mechanism, the framework provides reliable metrics for code correctness, integration capability, robustness, and multimodal validation. This enables fair and systematic comparison of AI programming assistants and establishes a new benchmark for AI-assisted software development in scientific computing.
📝 Abstract
Large Language Models (LLMs) are increasingly used for software development, yet existing benchmarks for LLM-based coding assistance do not reflect the constraints of High Energy Physics (HEP) and High Performance Computing (HPC) software. Code correctness must respect scientific constraints, and changes must integrate into large, performance-critical codebases with complex dependencies and build systems. The primary contribution of this paper is a set of practical, repeatable benchmarks that quantify LLM performance on HEP/HPC-relevant tasks. We introduce three evaluation tracks -- code documentation benchmarks measure the ability of an LLM to generate Doxygen-style comments, code generation benchmarks evaluate end-to-end usability on representative GPU kernels, and graphical data analysis benchmarks evaluate vision-enabled LLMs on graphical data analysis tasks. These benchmarks provide a unified framework for measuring progress in scientific coding assistance across documentation quality, code generation robustness, and multimodal validation. By emphasizing repeatability, automated scoring, and domain-relevant failure modes, the suite enables fair comparison of models and settings while supporting future work on methods that improve reliability in HEP/HPC software development.