🤖 AI Summary
Power optimization for exascale supercomputing remains challenging, particularly on heterogeneous architectures such as the NVIDIA GH200 superchip. Method: this work proposes a CPU–GPU collaborative runtime dynamic power-management framework, introducing a joint speedup–energy–delay metric and a Euclidean-distance-based multi-objective optimization scheme. It achieves, for the first time, fine-grained GPU task-level power control integrated with holistic CPU–GPU power orchestration. Contribution/Results: evaluated on the LSMS scientific application, the method demonstrates that moderate GPU power reduction preserves computational performance while significantly improving system energy efficiency, achieving 12.7% global energy savings with only marginal latency overhead (<3.2%). This work establishes a scalable methodology and an empirical foundation for adaptive energy-efficiency optimization in exascale systems.
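The speedup–energy–delay idea can be illustrated with a small scoring function. This is a minimal sketch, not the paper's actual formulation: it assumes the metric compares a power-capped run against an uncapped baseline by multiplying the speedup ratio with the energy-savings ratio, so that a score above 1 indicates a net improvement in the combined trade-off. The function name `sed_score` and all numbers are hypothetical.

```python
# Hypothetical speedup-energy-delay style score for comparing a
# power-capped run against an uncapped baseline.  The paper's exact
# metric may differ; higher score = better combined trade-off.

def sed_score(time_s: float, energy_j: float,
              base_time_s: float, base_energy_j: float) -> float:
    """Speedup times energy savings, both relative to the uncapped
    baseline.  A score > 1 means the cap improves the trade-off."""
    speedup = base_time_s / time_s            # > 1 if capped run is faster
    energy_ratio = base_energy_j / energy_j   # > 1 if capped run saves energy
    return speedup * energy_ratio

# Example: a GPU power cap that slows the run by 3% but cuts energy ~12%
baseline_time, baseline_energy = 100.0, 40_000.0   # seconds, joules (uncapped)
capped_time, capped_energy = 103.0, 35_200.0
print(sed_score(capped_time, capped_energy, baseline_time, baseline_energy))
# score > 1.0: the capped run wins the combined trade-off
```

The multiplicative form mirrors the summary's observation that a moderate cap can trade a small latency overhead for a larger energy saving and still come out ahead overall.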
📝 Abstract
With high-performance computing systems now running at exascale, optimizing power management and resource utilization has become more critical than ever. This paper explores runtime power-capping optimizations that leverage integrated CPU-GPU power management on architectures such as the NVIDIA GH200 superchip. We evaluate energy-performance metrics that account for simultaneous CPU and GPU power-capping effects using two complementary approaches: a speedup-energy-delay metric and a Euclidean-distance-based multi-objective optimization method. By targeting a mostly compute-bound exascale science application, the Locally Self-Consistent Multiple Scattering (LSMS) code, we explore challenging scenarios to identify potential opportunities for energy savings in exascale applications, recognizing that even modest reductions in energy consumption can have significant overall impact. Our results highlight how GPU task-specific dynamic power-cap adjustments, combined with integrated CPU-GPU power steering, can improve the energy utilization of certain GPU tasks, laying the groundwork for future adaptive optimization strategies.
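The Euclidean-distance-based multi-objective selection mentioned above can be sketched as follows. This is an illustrative reading, not the paper's implementation: it assumes each candidate power cap is measured as a (runtime, energy) pair, both objectives are min-max normalized across the candidates, and the cap whose normalized point lies closest to the utopia point (0, 0) is chosen. The function `pick_power_cap` and the per-cap measurements are hypothetical.

```python
import math

def pick_power_cap(candidates: dict[int, tuple[float, float]]) -> int:
    """candidates maps cap_watts -> (runtime_s, energy_j).
    Min-max normalize each objective across candidates, then return the
    cap whose point is closest (Euclidean distance) to the ideal
    minimum-runtime, minimum-energy corner."""
    times = [t for t, _ in candidates.values()]
    energies = [e for _, e in candidates.values()]
    t_min, t_max = min(times), max(times)
    e_min, e_max = min(energies), max(energies)

    def norm(x: float, lo: float, hi: float) -> float:
        return 0.0 if hi == lo else (x - lo) / (hi - lo)

    def distance(cap: int) -> float:
        t, e = candidates[cap]
        return math.hypot(norm(t, t_min, t_max), norm(e, e_min, e_max))

    return min(candidates, key=distance)

# Hypothetical per-cap measurements for one GPU task: watts -> (s, J).
# The middle cap trades a small slowdown for a large energy saving.
caps = {700: (100.0, 42_000.0), 550: (102.0, 37_000.0), 450: (112.0, 35_500.0)}
print(pick_power_cap(caps))  # -> 550
```

Normalizing before taking the distance keeps the two objectives commensurate; without it, joules would dominate seconds purely because of their magnitude.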