🤖 AI Summary
To address high HBM access overhead and low compute-unit utilization in multi-head attention (MHA) on tile-based many-core accelerators, this paper proposes FlatAttention, a hardware-software co-optimized dataflow. FlatAttention tightly couples MHA-specific dataflow mapping with collective communication primitives in the on-chip network to minimize off-chip data movement. On a 32×32 tile architecture, it achieves 89.3% FP16 compute utilization, delivers a 4.1× speedup over the FlashAttention-3 dataflow, and reduces HBM traffic by 16×. Scaled out to a 1024-TFLOPS (FP16) accelerator comparable to the NVIDIA H100, it achieves 1.3× higher compute utilization than FlashAttention-3 on the H100 while requiring 40% less HBM bandwidth, enabling an estimated 1.8× smaller die on the same technology node. To the authors' knowledge, this is the first work to enable high-throughput, low-memory-access, energy-efficient MHA execution on large-scale tile arrays.
📝 Abstract
Multi-Head Attention (MHA) is a critical computational kernel in transformer-based AI models. Emerging scalable tile-based accelerator architectures integrate increasing numbers of tightly packed processing elements (PEs) with tensor units. MHA dataflow mapping is crucial for achieving high utilization of the available units. We propose FlatAttention, a new dataflow for MHA on tile-based many-PE accelerators that minimizes costly main-memory (HBM) accesses by leveraging collective primitives integrated into the on-chip network fabric. FlatAttention achieves up to 89.3% utilization and a 4.1x speedup over the FlashAttention-3 dataflow on tile-based accelerators, while reducing HBM traffic by 16x. Through algorithm-architecture co-exploration, we identify an optimal configuration for a large, scaled-out tile-based accelerator: a 32x32 tile mesh with 1024 TFLOPS @ FP16 peak performance, comparable to the state-of-the-art NVIDIA H100 GPU. In this configuration, FlatAttention achieves up to 1.3x higher utilization than FlashAttention-3 on the H100. Meanwhile, this tile-based accelerator configuration requires 40% less HBM bandwidth than the H100, enabling a 1.8x reduction in die size, estimated on the same technology node.
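For context on the baseline the paper compares against: FlashAttention-style dataflows process K/V in tiles with an online softmax so the full N×N score matrix never leaves on-chip memory, which is the HBM-traffic pattern FlatAttention further reduces. A minimal single-head NumPy sketch of that tiling (function name, tile size, and shapes are illustrative, not from the paper):

```python
import numpy as np

def tiled_attention(Q, K, V, tile=32):
    """FlashAttention-style tiled attention with online softmax.

    Streams K/V in tiles and keeps only per-row running statistics
    (max `m`, denominator `l`) plus an unnormalized output accumulator,
    so the N x N score matrix is never materialized in full.
    """
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)        # running unnormalized output
    m = np.full(N, -np.inf)     # running row-wise max of scores
    l = np.zeros(N)             # running softmax denominator
    for j in range(0, K.shape[0], tile):
        Kj, Vj = K[j:j + tile], V[j:j + tile]
        S = (Q @ Kj.T) * scale                  # scores for this tile only
        m_new = np.maximum(m, S.max(axis=1))    # updated row max
        p = np.exp(S - m_new[:, None])          # tile-local numerator
        alpha = np.exp(m - m_new)               # rescale old partial sums
        l = alpha * l + p.sum(axis=1)
        O = alpha[:, None] * O + p @ Vj
        m = m_new
    return O / l[:, None]
```

The result matches untiled softmax attention; only the order of memory accesses changes, which is exactly why dataflow mapping, rather than arithmetic, determines utilization and HBM traffic on these architectures.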