🤖 AI Summary
To address the on-chip memory bandwidth bottleneck in AI/ML streaming applications—exacerbated by growing model and data sizes, especially under data-parallel architectures (e.g., GPUs, neural accelerators) and low-data-reuse loop nests—this work proposes a scalable three-level heterogeneous on-chip memory hierarchy (local/intermediate/global), integrating ultra-wide register files and a programmable data rearranger. It further introduces customized data mapping and memory access scheduling algorithms tailored for streaming workloads such as CNNs. The key innovations are the first scalable three-level on-chip memory architecture and a dynamic data rearrangement mechanism, significantly enhancing vector processors’ adaptability to diverse data reuse patterns. Evaluated on representative CNN workloads, the design achieves up to a 2.3× improvement in memory bandwidth utilization over GPU and systolic-array baselines, along with a 41% gain in end-to-end energy efficiency and a 36% reduction in latency.
📝 Abstract
As artificial intelligence and machine learning (AI/ML) models and datasets grow in size, memory bandwidth becomes a critical bottleneck. This paper presents a novel extended memory hierarchy that addresses major memory bandwidth challenges in data-parallel AI/ML applications. While data-parallel architectures such as GPUs and neural network accelerators improve power-performance over traditional CPUs, they can still be significantly bottlenecked by memory bandwidth, especially when data reuse in the loop kernels is limited. Systolic arrays (SAs) and GPUs attempt to mitigate this bottleneck, but they can still become bandwidth-throttled when the amount of data reuse is not sufficient to confine data accesses mostly to the local memories close to the processing elements. To mitigate this, the proposed architecture introduces three levels of on-chip memory -- local, intermediate, and global -- along with an ultra-wide register and data shufflers to improve versatility and adaptivity across varying data-parallel applications. The paper explains the innovations at a conceptual level and presents a detailed description of the architecture. We also map a representative data-parallel application, a convolutional neural network (CNN), to the proposed architecture and quantify the benefits vis-à-vis GPUs and representative accelerators based on systolic arrays and vector processors.
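The bandwidth argument above can be made concrete with a roofline-style estimate: a kernel's attainable throughput is capped either by peak compute or by memory bandwidth times its arithmetic intensity (FLOPs per byte moved), so low-reuse loop nests hit the bandwidth ceiling long before the compute ceiling. The sketch below is illustrative only and not from the paper; the peak-compute and bandwidth figures are assumed numbers, not measurements of the proposed architecture.

```python
# Roofline-style sketch (illustrative, not from the paper): shows why
# low-data-reuse kernels become memory-bandwidth bound.

def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs performed per byte moved between memory and the datapath."""
    return flops / bytes_moved

def attainable_gflops(intensity: float, peak_gflops: float, bw_gb_s: float) -> float:
    """Roofline model: throughput is capped by compute or by bandwidth."""
    return min(peak_gflops, intensity * bw_gb_s)

# Assumed machine parameters: 1 TFLOP/s peak compute, 100 GB/s memory bandwidth.
PEAK, BW = 1000.0, 100.0

# Low-reuse example: a matrix-vector product over n fp32 elements does
# roughly 2n FLOPs while moving ~4n bytes -> ~0.5 FLOP/byte.
gemv_ai = arithmetic_intensity(flops=2 * 1024, bytes_moved=4 * 1024)

print(attainable_gflops(gemv_ai, PEAK, BW))   # bandwidth-bound: 0.5 * 100 = 50.0
print(attainable_gflops(20.0, PEAK, BW))      # high-reuse kernel hits the compute roof: 1000.0
```

With only 0.5 FLOP/byte, the kernel reaches just 5% of peak compute no matter how many processing elements are added, which is the regime the proposed three-level on-chip hierarchy targets by keeping more accesses in memories close to the processing elements.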