🤖 AI Summary
This work addresses the inefficiency of conventional networks-on-chip (NoCs) in supporting collective communication operations—such as multicast and reduction—in large-scale machine learning accelerators. To overcome this limitation, the authors propose a lightweight, collective-communication-aware NoC architecture built around a novel Direct Compute Access (DCA) paradigm, which lets the interconnect directly invoke compute units for in-network reduction and synchronization. The design incurs only a 16.5% increase in router area yet achieves significant performance gains: it delivers geometric mean speedups of 2.9× for multicast and 2.5× for reduction across data sizes ranging from 1 to 32 KiB. Furthermore, when evaluated on GEMM workloads, the proposed NoC attains up to 3.8× (multicast) and 2.4× (reduction) higher estimated performance and up to 1.17× better energy efficiency compared to a unicast-based baseline NoC.
📝 Abstract
The exponential increase in Machine Learning (ML) model size and complexity has driven unprecedented demand for high-performance acceleration systems. As technology scaling enables the integration of thousands of computing elements onto a single die, the boundary between distributed and on-chip systems has blurred, making efficient on-chip collective communication increasingly critical. In this work, we present a lightweight, collective-capable Network-on-Chip (NoC) that supports efficient barrier synchronization alongside scalable, high-bandwidth multicast and reduction operations, co-designed for the next generation of ML accelerators. We introduce Direct Compute Access (DCA), a novel paradigm that grants the interconnect fabric direct access to the cores' computational resources, enabling high-throughput in-network reductions with a small 16.5% router area overhead. Through in-network hardware acceleration, we achieve 2.9x and 2.5x geomean speedups on multicast and reduction operations involving between 1 and 32 KiB of data, respectively. Furthermore, by keeping communication off the critical path in GEMM workloads, these features allow our architecture to scale efficiently to large meshes, resulting in up to 3.8x and 2.4x estimated performance gains through multicast and reduction support, respectively, compared to a baseline unicast NoC architecture, and up to 1.17x estimated energy savings.
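To build intuition for why in-network reduction helps, consider a toy cost model (not taken from the paper) that counts link traversals for an all-to-one reduction on a k×k mesh. With plain unicast, every core ships its full operand to the root, so cost grows with the sum of Manhattan distances; with an in-network reduction tree, routers combine partial results as they forward them, so each tree link is traversed only once. The function names and the model itself are illustrative assumptions, not the authors' evaluation methodology:

```python
# Toy model: link traversals for an all-to-one reduction on a k x k mesh.
# Illustrative sketch only -- hop counts, not cycles or bandwidth.

def unicast_hops(k, root=(0, 0)):
    # Each non-root core sends one full message to the root;
    # cost = sum of Manhattan distances (assuming XY routing).
    rx, ry = root
    return sum(abs(x - rx) + abs(y - ry)
               for x in range(k) for y in range(k)
               if (x, y) != root)

def in_network_hops(k):
    # A spanning reduction tree over k*k routers has k*k - 1 links,
    # and each link carries exactly one partially reduced message.
    return k * k - 1

for k in (4, 8, 16):
    print(f"{k}x{k} mesh: unicast={unicast_hops(k)}, "
          f"in-network={in_network_hops(k)}")
```

The gap widens with mesh size (e.g. 48 vs 15 traversals at 4×4, 3840 vs 255 at 16×16), which is consistent with the abstract's observation that collective support is what lets the design scale efficiently to large meshes.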