📝 Abstract
We discuss the challenges of, and propose research directions for, using AI to revolutionize the development of high-performance computing (HPC) software. AI technologies, in particular large language models, have transformed every aspect of software development. HPC software, for its part, is recognized as a highly specialized scientific field in its own right. We discuss the challenges of leveraging state-of-the-art AI technologies to develop such a unique and niche class of software, and outline our research directions in two US Department of Energy-funded projects for advancing HPC software via AI: Ellora and Durban.