FlatAttention: Dataflow and Fabric Collectives Co-Optimization for Efficient Multi-Head Attention on Tile-Based Many-PE Accelerators

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address high HBM access overhead and low compute-unit utilization in multi-head attention (MHA) on tile-based many-PE accelerators, this paper proposes FlatAttention, a hardware-software co-optimized dataflow. FlatAttention tightly couples MHA-specific dataflow mapping with collective communication primitives integrated into the on-chip network fabric to minimize off-chip data movement. On a 32×32 tile architecture, it achieves up to 89.3% FP16 compute utilization, reduces HBM traffic by 16×, and delivers a 4.1× speedup over the FlashAttention-3 dataflow on the same tile-based accelerator. Scaled out to a 1024-TFLOPS (FP16) configuration comparable to an NVIDIA H100, it reaches up to 1.3× higher utilization than FlashAttention-3 on the H100 while requiring 40% less HBM bandwidth, enabling an estimated 1.8× smaller die on the same technology node.

📝 Abstract
Multi-Head Attention (MHA) is a critical computational kernel in transformer-based AI models. Emerging scalable tile-based accelerator architectures integrate increasing numbers of tightly packed processing elements (PEs) with tensor units. MHA dataflow mapping is crucial for achieving high utilization of the available units. We propose FlatAttention, a new dataflow for MHA on tile-based many-PE accelerators, minimizing costly main memory (HBM) accesses by leveraging collective primitives integrated into the on-chip network fabric. FlatAttention achieves up to 89.3% utilization and a 4.1x performance speedup over the FlashAttention-3 dataflow on tile-based accelerators, while reducing HBM traffic by 16x. Through algorithm-architecture co-exploration, we identify an optimal configuration for a large scaled-out tile-based accelerator: a 32x32 tile mesh with 1024 TFLOPS @ FP16 peak performance, comparable to the state-of-the-art Nvidia H100 GPU. In this configuration, FlatAttention achieves up to 1.3x higher utilization than FlashAttention-3 on the H100 GPU, while requiring 40% less HBM bandwidth than the H100, enabling a 1.8x reduction in die size, estimated on the same technology node.
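For orientation, the sketch below is a minimal numpy implementation of block-tiled attention with online softmax, the FlashAttention-style computation that both dataflows must realize on the tile mesh. The tile sizes, function name, and sanity check are illustrative assumptions, not the paper's mapping; FlatAttention's contribution is how such tiles are scheduled across PEs and exchanged via fabric collectives instead of through HBM.

```python
import numpy as np

def tiled_attention(Q, K, V, Bq=64, Bk=64):
    """Block-tiled attention with online softmax (FlashAttention-style).

    Streams K/V in tiles so the full score matrix never materializes.
    Tile sizes Bq/Bk are illustrative, not the paper's configuration.
    """
    S, d = Q.shape
    O = np.zeros_like(Q)
    for q0 in range(0, S, Bq):                 # one query tile at a time
        Qi = Q[q0:q0 + Bq]
        m = np.full(Qi.shape[0], -np.inf)      # running row maxima
        l = np.zeros(Qi.shape[0])              # running softmax denominators
        acc = np.zeros_like(Qi)                # unnormalized output accumulator
        for k0 in range(0, S, Bk):             # stream K/V tiles through
            Kj, Vj = K[k0:k0 + Bk], V[k0:k0 + Bk]
            Sij = Qi @ Kj.T / np.sqrt(d)       # partial score tile
            m_new = np.maximum(m, Sij.max(axis=1))
            P = np.exp(Sij - m_new[:, None])   # rescaled partial softmax
            scale = np.exp(m - m_new)          # correction for old partials
            l = l * scale + P.sum(axis=1)
            acc = acc * scale[:, None] + P @ Vj
            m = m_new
        O[q0:q0 + Bq] = acc / l[:, None]
    return O

# Sanity check against the naive reference (d = 64, so sqrt(d) = 8).
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 64)) for _ in range(3))
scores = Q @ K.T / 8
ref = np.exp(scores - scores.max(axis=1, keepdims=True))
ref = (ref / ref.sum(axis=1, keepdims=True)) @ V
assert np.allclose(tiled_attention(Q, K, V), ref, atol=1e-6)
```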
Problem

Research questions and friction points this paper is trying to address.

Optimizing MHA dataflow for tile-based accelerators
Reducing HBM accesses via on-chip collective primitives
Enhancing performance and utilization over FlashAttention-3
Innovation

Methods, ideas, or system contributions that make the work stand out.

Co-optimizes dataflow and fabric collectives
Minimizes HBM accesses via on-chip collective primitives (see the traffic sketch after this list)
Enables high PE utilization and performance
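The paper does not spell out its traffic model in this summary. As rough intuition for why fabric collectives cut HBM traffic, the toy model below compares every PE fetching its own K/V tiles against one fetch per mesh row followed by an on-chip multicast. All numbers and the row-broadcast partitioning are illustrative assumptions, not the paper's scheme; the paper reports a 16x reduction for its 32x32 design.

```python
# First-order HBM traffic model for a P x P tile mesh computing one
# attention head. Without collectives, every PE streams all of K/V from
# HBM; with an on-chip row broadcast, each K/V tile is fetched once per
# mesh row and multicast over the NoC. Illustrative assumptions only.

def hbm_traffic_bytes(S=4096, d=128, P=32, elem=2):
    """Return (per-PE-fetch, broadcast) HBM traffic in bytes, FP16."""
    kv_bytes = 2 * S * d * elem          # one full K and one full V
    q_bytes = S * d * elem               # queries, read once either way
    naive = q_bytes + P * P * kv_bytes   # every PE fetches all of K/V
    collective = q_bytes + P * kv_bytes  # one fetch per row, then multicast
    return naive, collective

naive, coll = hbm_traffic_bytes()
print(f"naive: {naive / 2**20:.0f} MiB, broadcast: {coll / 2**20:.0f} MiB, "
      f"reduction: {naive / coll:.1f}x")
```

Under this toy model the reduction grows roughly with the mesh dimension P, which is why coupling the dataflow to NoC collectives pays off most on large tile arrays.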
Authors

Chi Zhang
Integrated Systems Laboratory (IIS), ETH Zurich, Zurich, Switzerland
Luca Colagrande
PhD student, ETH Zurich
Computer Architecture, High-Performance Computing, Machine Learning, Integrated Circuits
Renzo Andri
Huawei Technologies
Computer Architectures, Machine Learning, Computer Vision, Low Power Design, ASIC Design
Thomas Emanuel Benz
Integrated Systems Laboratory (IIS), ETH Zurich, Zurich, Switzerland
Gamze Islamoglu
Integrated Systems Laboratory (IIS), ETH Zurich, Zurich, Switzerland
Alessandro Nadalini
Department of Electrical, Electronic, and Information Engineering (DEI), University of Bologna, Bologna, Italy
Francesco Conti
Associate Professor, University of Bologna
Hardware Accelerators, Deep Learning, Ultra-Low Power Computing
Yawei Li
Integrated Systems Laboratory (IIS), ETH Zurich, Zurich, Switzerland
Luca Benini
ETH Zürich, Università di Bologna
Integrated Circuits, Computer Architecture, Embedded Systems, VLSI, Machine Learning