🤖 AI Summary
To address the challenge of achieving both scalability and performance portability for high-resolution adaptive mesh refinement (AMR) simulations, such as stellar mergers in astrophysics, across heterogeneous supercomputing architectures (GPU, x86, ARM, and RISC-V), this paper introduces a multi-task programming paradigm integrating Kokkos, the HPX asynchronous runtime, and SIMD vectorization. The paradigm enables unified cross-architecture task scheduling, dynamic load balancing, and multi-backend network abstraction. It demonstrates scalability on Perlmutter (110,080 CPU cores and 6,880 A100 GPUs), Frontier (32,768 CPU cores and 2,048 MI250X GPUs), and Fugaku, attaining 47.59% parallel efficiency and 26% of HPCG peak performance on a full-system Perlmutter run, and 51.37% parallel efficiency on Frontier. This work represents the first full-stack hardware demonstration of large-scale performance portability for AMR simulations, validating consistent efficiency across diverse accelerator- and CPU-based platforms.
📝 Abstract
Dynamic and adaptive mesh refinement is pivotal in high-resolution, multi-physics, multi-model simulations, which must resolve physics precisely in localized areas across expansive domains. The extreme heterogeneity of today's supercomputers presents a significant challenge for dynamically adaptive codes, making performance portability at scale essential. Our research focuses on astrophysical simulations, particularly stellar mergers, to elucidate early universe dynamics. We present Octo-Tiger, which leverages Kokkos, HPX, and SIMD for portable performance at scale in complex, massively parallel adaptive multi-physics simulations. Octo-Tiger supports diverse processors, accelerators, and network backends. Experiments demonstrate exceptional scalability across several heterogeneous supercomputers, including Perlmutter, Frontier, and Fugaku, encompassing major GPU architectures as well as x86, ARM, and RISC-V CPUs. Parallel efficiencies of 47.59% on a full-system run on Perlmutter (110,080 cores and 6,880 hybrid A100 GPUs, reaching 26% of HPCG peak performance) and 51.37% on Frontier (32,768 cores and 2,048 MI250X GPUs) are achieved.