Asynchronous-Many-Task Systems: Challenges and Opportunities - Scaling an AMR Astrophysics Code on Exascale machines using Kokkos and HPX

📅 2024-12-20
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
To address the challenge of achieving both scalability and performance portability for high-resolution adaptive mesh refinement (AMR) simulations—such as stellar mergers in astrophysics—across heterogeneous supercomputing architectures (GPUs and x86, ARM, and RISC-V CPUs), this paper presents an asynchronous many-task (AMT) programming approach integrating Kokkos, the HPX asynchronous runtime, and explicit SIMD vectorization. The approach enables unified cross-architecture task scheduling, dynamic load balancing, and multi-backend network abstraction. It demonstrates strong scalability on Perlmutter (47.59% parallel efficiency and 26% of HPCG peak performance on a full-system run with 110,080 CPU cores and 6,880 A100 GPUs), Frontier (51.37% parallel efficiency with 32,768 CPU cores and 2,048 MI250X GPUs), and Fugaku. This work demonstrates large-scale performance portability for AMR simulations, validating consistent efficiency across diverse accelerator- and CPU-based platforms.

📝 Abstract
Dynamic and adaptive mesh refinement is pivotal in high-resolution, multi-physics, multi-model simulations, necessitating precise physics resolution in localized areas across expansive domains. Today's supercomputers' extreme heterogeneity presents a significant challenge for dynamically adaptive codes, highlighting the importance of achieving performance portability at scale. Our research focuses on astrophysical simulations, particularly stellar mergers, to elucidate early universe dynamics. We present Octo-Tiger, leveraging Kokkos, HPX, and SIMD for portable performance at scale in complex, massively parallel adaptive multi-physics simulations. Octo-Tiger supports diverse processors, accelerators, and network backends. Experiments demonstrate exceptional scalability across several heterogeneous supercomputers including Perlmutter, Frontier, and Fugaku, encompassing major GPU architectures and x86, ARM, and RISC-V CPUs. Parallel efficiency of 47.59% (110,080 cores and 6880 hybrid A100 GPUs) on a full-system run on Perlmutter (26% HPCG peak performance) and 51.37% (using 32,768 cores and 2,048 MI250X) on Frontier are achieved.
Problem

Research questions and friction points this paper is trying to address.

Achieving performance portability on heterogeneous exascale supercomputers
Scaling adaptive mesh refinement for multi-physics astrophysical simulations
Enabling efficient stellar merger simulations across diverse processor architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using Kokkos and HPX for performance portability
Leveraging SIMD for complex adaptive multi-physics simulations
Supporting diverse processors, accelerators and network backends
Gregor Daiß
University of Stuttgart, 70569 Stuttgart, Germany
Patrick Diehl
Los Alamos National Laboratory
Jiakun Yan
University of Illinois Urbana-Champaign, Champaign, IL, 61801 U.S.A.
John K. Holmen
Oak Ridge National Laboratory, Oak Ridge, TN, 37831 U.S.A.
Rahulkumar Gayatri
Lawrence Berkeley National Laboratory, Berkeley, CA 94720 U.S.A.
Christoph Junghans
Los Alamos National Laboratory, Los Alamos, NM, 87545 U.S.A.
Alexander Straub
University of Stuttgart, 70569 Stuttgart, Germany
Jeff R. Hammond
NVIDIA Helsinki Oy, Helsinki, 00180 Finland
Dominic C. Marcello
Louisiana State University, Baton Rouge, LA, 70803 U.S.A.
Miwako Tsuji
RIKEN Center for Computational Science, Kobe, 650-0047 JAPAN
Dirk Pflüger
University of Stuttgart, 70569 Stuttgart, Germany
Hartmut Kaiser
Center for Computation and Technology, Louisiana State University