Optimizing Attention on GPUs by Exploiting GPU Architectural NUMA Effects

πŸ“… 2025-11-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Multi-chip GPUs (e.g., AMD MI300X) suffer from NUMA-induced performance bottlenecks in multi-head attention computation due to poor data locality and cache inefficiency. Method: We propose a spatially aware scheduling strategy centered on Swizzled Head-first Mappingβ€”a novel technique that explicitly maps attention heads to their corresponding NUMA domains to align compute-memory affinity, thereby enhancing on-chip cache reuse and data locality. This is complemented by fine-grained memory access optimization and inter-chip parallel scheduling, overcoming the limitations of conventional unified-memory abstractions. Contribution/Results: Evaluated on state-of-the-art large language models, our approach achieves up to 50% end-to-end speedup over the best prior method. L2 cache hit rates consistently remain between 80% and 97%. The solution significantly improves training and inference scalability for foundation models on disaggregated GPU architectures.

πŸ“ Abstract
The rise of disaggregated AI GPUs has exposed a critical bottleneck in large-scale attention workloads: non-uniform memory access (NUMA). As multi-chiplet designs become the norm for scaling compute capabilities, memory latency and bandwidth vary sharply across compute regions, undermining the performance of traditional GPU kernel scheduling strategies that assume uniform memory access. We identify how these NUMA effects distort locality in multi-head attention (MHA) and present Swizzled Head-first Mapping, a spatially-aware scheduling strategy that aligns attention heads with GPU NUMA domains to exploit intra-chiplet cache reuse. On AMD's MI300X architecture, our method achieves up to 50% higher performance over state-of-the-art attention algorithms using conventional scheduling techniques and sustains consistently high L2 cache hit rates of 80-97%. These results demonstrate that NUMA-aware scheduling is now fundamental to achieving full efficiency on next-generation disaggregated GPUs, offering a path forward for scalable AI training and inference.
Problem

Research questions and friction points this paper is trying to address.

Addressing NUMA-induced memory bottlenecks in large-scale GPU attention workloads
Optimizing multi-head attention performance on disaggregated GPU architectures
Improving cache utilization through NUMA-aware attention head scheduling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatially-aware scheduling strategy for GPU NUMA domains
Swizzled Head-first Mapping aligns attention heads with GPU NUMA domains
Exploits intra-chiplet cache reuse for performance gains
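The scheduling idea summarized above can be sketched compactly: on MI300X-class multi-chiplet GPUs, hardware dispatches workgroup `i` to XCD `i % 8` in round-robin order, so a kernel can invert that assignment in software to keep all tiles of one attention head on one chiplet. The snippet below is an illustrative sketch of that remapping, not the paper's actual kernel; the names `NUM_XCDS` and `swizzled_head_tile` are hypothetical, and it assumes the head count divides evenly across XCDs.

```python
# Illustrative sketch of a swizzled head-first mapping (assumption:
# this mirrors the paper's idea, not its exact implementation).
# On MI300X, hardware dispatches workgroup i to XCD (i % NUM_XCDS)
# round-robin; we remap block ids so every tile of a given attention
# head lands on the same XCD, maximizing intra-chiplet L2 reuse.

NUM_XCDS = 8  # MI300X has 8 accelerator complex dies (XCDs)

def swizzled_head_tile(block_id, num_heads, num_xcds=NUM_XCDS):
    """Remap a hardware block id to a (head, tile) pair such that
    all tiles of one head run on a single XCD.

    Assumes num_heads is a multiple of num_xcds.
    """
    heads_per_xcd = num_heads // num_xcds
    xcd = block_id % num_xcds        # fixed by hardware round-robin dispatch
    local = block_id // num_xcds     # this block's position within its XCD
    head = xcd * heads_per_xcd + local % heads_per_xcd
    tile = local // heads_per_xcd
    return head, tile
```

With 32 heads and 8 XCDs, heads 0-3 stay on XCD 0, heads 4-7 on XCD 1, and so on, so each head's K/V tiles are streamed through a single chiplet's L2 instead of being scattered across all eight.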
Mansi Choudhary
Department of ECE, Duke University, Durham, USA
Karthik Sangaiah
Advanced Micro Devices Inc., Santa Clara, USA
Sonali Singh
Advanced Micro Devices Inc., Santa Clara, USA
Muhammad Osama
Advanced Micro Devices Inc., Santa Clara, USA
Lisa Wu Wills
Assistant Professor of Computer Science and ECE, Duke University
Computer Architecture, Accelerator Architecture, Database and Graph Analytics, Genomics and …
Ganesh Dasika