🤖 AI Summary
This paper addresses scalability across three dimensions: vertical (within a compute node), horizontal (across compute nodes), and temporal (across generations of hardware), while retaining the productivity of high-level programming languages. Its central abstraction is the distributed array, which derives parallelism from data locality and thereby achieves high memory bandwidth efficiency. The approach is evaluated with the STREAM memory bandwidth benchmark on a range of CPU and GPU hardware, demonstrating scalable performance within and across CPU cores, CPU nodes, and GPU nodes, with linear horizontal scaling across nodes. Because the hardware spans decades, the results also provide a direct comparison of memory bandwidth improvements over time: roughly a 10x increase per CPU core over 20 years, a 100x increase per CPU node over 20 years, and a 5x increase per GPU node over 5 years. Running on hundreds of MIT SuperCloud nodes simultaneously sustained an aggregate bandwidth exceeding 1 PB/s.
📝 Abstract
High-level programming languages and GPU accelerators are powerful enablers for a wide range of applications. Achieving scalable vertical (within a compute node), horizontal (across compute nodes), and temporal (over different generations of hardware) performance while retaining productivity requires effective abstractions. Distributed arrays are one such abstraction that enables high-level programming to achieve highly scalable performance. Distributed arrays achieve this performance by deriving parallelism from data locality, which naturally leads to high memory bandwidth efficiency. This paper explores distributed array performance using the STREAM memory bandwidth benchmark on a variety of hardware. Scalable performance is demonstrated within and across CPU cores, CPU nodes, and GPU nodes. Horizontal scaling across multiple nodes was linear. The hardware used spans decades and allows a direct comparison of hardware improvements for memory bandwidth over this time range, showing a 10x increase in CPU core bandwidth over 20 years, a 100x increase in CPU node bandwidth over 20 years, and a 5x increase in GPU node bandwidth over 5 years. Running on hundreds of MIT SuperCloud nodes simultaneously achieved a sustained bandwidth >1 PB/s.
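To make the measurement concrete, below is a minimal single-node sketch of the STREAM "triad" kernel (a[i] = b[i] + scalar*c[i]), the bandwidth-bound operation at the heart of the STREAM benchmark. This is an illustrative NumPy version, not the paper's distributed-array implementation; the array size, scalar value, and byte-counting convention are assumptions chosen for clarity.

```python
import time
import numpy as np

# Illustrative array size: large enough to exceed typical CPU caches,
# so the kernel is limited by memory bandwidth rather than compute.
N = 10_000_000
scalar = 3.0

b = np.ones(N)          # read-only input
c = np.full(N, 2.0)     # read-only input
a = np.empty(N)         # output

t0 = time.perf_counter()
np.multiply(c, scalar, out=a)  # a = scalar * c
np.add(a, b, out=a)            # a = b + scalar * c  (the triad)
elapsed = time.perf_counter() - t0

# The triad touches three arrays of 8-byte doubles per element:
# read b, read c, write a.
bytes_moved = 3 * N * 8
print(f"Triad bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```

Scaling this to the results reported above would mean replicating the same locality-driven kernel across cores and nodes with distributed arrays, so that each processor runs the triad on the data it owns.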