Training LLMs on HPC Systems: Best Practices from the OpenGPT-X Project

📅 2025-04-14
🤖 AI Summary
Addressing scalability and efficiency bottlenecks in large language model (LLM) training on high-performance computing (HPC) systems, with a focus on multilingual support for European languages. Method: We conducted the first end-to-end, fully open-source training of the 7B-parameter Teuken-7B model on the JUWELS Booster supercomputer (NVIDIA A100 GPUs with the CUDA software stack) by co-designing a hardware-software training stack that integrates PyTorch, DeepSpeed, Megatron-LM, and Slurm. Our approach combines distributed scheduling, memory optimization, and cross-node communication strategies tailored to the HPC infrastructure, complemented by custom performance-analysis and diagnostic tooling. Contribution/Results: The trained model achieves state-of-the-art (SOTA) performance on German, French, Spanish, and other European language benchmarks; training throughput improves by 40% over baseline configurations; and all training configurations, reproducible workflows, and engineering best practices are publicly released under open-source licenses.
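The stack named above (PyTorch + Megatron-LM/DeepSpeed, launched via Slurm) is typically driven by a batch script. A minimal hypothetical sketch follows; the node count, time limit, parallelism degrees, and the `pretrain_gpt.py` entry point are illustrative assumptions, not configuration taken from the paper:

```shell
#!/bin/bash
#SBATCH --job-name=teuken-7b        # illustrative job name
#SBATCH --nodes=64                  # assumed node count, not from the paper
#SBATCH --gres=gpu:4                # JUWELS Booster nodes carry 4x A100
#SBATCH --ntasks-per-node=4        # one rank per GPU
#SBATCH --time=24:00:00

# Rendezvous info for torch.distributed: rank 0's host and a free port.
export MASTER_ADDR="$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n1)"
export MASTER_PORT=29500

# srun starts one process per GPU; each rank reads its identity
# (SLURM_PROCID, SLURM_NTASKS, ...) from the Slurm environment.
srun python pretrain_gpt.py \
    --tensor-model-parallel-size 2 \
    --pipeline-model-parallel-size 4 \
    --micro-batch-size 1 \
    --global-batch-size 1024
```

The four Megatron-LM flags shown are real arguments of its `pretrain_gpt.py`; the specific values here are placeholders for whatever layout the cluster and model size dictate.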

📝 Abstract
The training of large language models (LLMs) requires substantial computational resources, complex software stacks, and carefully designed workflows to achieve scalability and efficiency. This report presents best practices and insights gained from the OpenGPT-X project, a German initiative focused on developing open, multilingual LLMs optimized for European languages. We detail the use of high-performance computing (HPC) systems, primarily JUWELS Booster at JSC, for training Teuken-7B, a 7-billion-parameter transformer model. The report covers system architecture, training infrastructure, software choices, profiling and benchmarking tools, as well as engineering and operational challenges.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM training on HPC systems for scalability
Developing open multilingual LLMs for European languages
Addressing engineering challenges in large-scale transformer training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizing HPC systems for LLM training
Optimizing multilingual models for European languages
Implementing profiling and benchmarking tools
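To make the scheduling side of these contributions concrete, the sketch below shows the arithmetic that ties a 3D-parallel layout (tensor, pipeline, data parallelism) to the global batch size. The specific numbers are hypothetical, not the paper's configuration:

```python
def parallel_layout(world_size: int, tensor_parallel: int,
                    pipeline_parallel: int) -> dict:
    """Derive the data-parallel degree from a 3D-parallel layout.

    world_size must be divisible by tensor_parallel * pipeline_parallel;
    the remaining factor is the number of data-parallel replicas.
    """
    model_parallel = tensor_parallel * pipeline_parallel
    if world_size % model_parallel != 0:
        raise ValueError("world size not divisible by model-parallel size")
    return {
        "data_parallel": world_size // model_parallel,
        "tensor_parallel": tensor_parallel,
        "pipeline_parallel": pipeline_parallel,
    }

def global_batch_size(micro_batch: int, grad_accum_steps: int,
                      data_parallel: int) -> int:
    """Global batch = micro-batch x gradient-accumulation steps x DP replicas."""
    return micro_batch * grad_accum_steps * data_parallel

# Hypothetical example: 64 nodes x 4 GPUs = 256 ranks, TP=2, PP=4 -> DP=32.
layout = parallel_layout(256, 2, 4)
print(layout["data_parallel"])                             # 32
print(global_batch_size(1, 32, layout["data_parallel"]))   # 1024
```

Getting these factors consistent is exactly the kind of check a launcher must perform before a job burns node-hours: an indivisible world size fails immediately rather than mid-run.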
Carolin Penke
Jülich Supercomputing Centre, Forschungszentrum Jülich, Jülich, Germany
Chelsea Maria John
Jülich Supercomputing Centre, Forschungszentrum Jülich, Jülich, Germany
Jan Ebert
Forschungszentrum Jülich GmbH
Stefan Kesselheim
Jülich Supercomputing Center, Jülich Research Centre
A. Herten
Jülich Supercomputing Centre, Forschungszentrum Jülich, Jülich, Germany