From Loop Nests to Silicon: Mapping AI Workloads onto AMD NPUs with MLIR-AIR

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Modern spatial architectures (e.g., AMD NPUs) demand fine-grained co-design of data movement, execution order, and compute placement, challenges that general-purpose compilers struggle to address because they abstract away parallelism, locality, and synchronization. This paper introduces MLIR-AIR, an open-source compiler stack whose AIR dialect provides structured representations of asynchronous, hierarchical operations, enabling efficient mapping of AI workloads onto spatial hardware. Its key contributions are compiler-driven spatial scheduling, computation partitioning, and communication–computation overlap, eliminating reliance on manual scheduling or ad hoc runtime coordination. In evaluation, matrix multiplication reaches up to 78.7% of peak compute efficiency, nearly matching hand-optimized MLIR-AIE code; moreover, a fused multi-head attention block is expressed in roughly 150 lines of source code while achieving high performance.

📝 Abstract
General-purpose compilers abstract away parallelism, locality, and synchronization, limiting their effectiveness on modern spatial architectures. As modern computing architectures increasingly rely on fine-grained control over data movement, execution order, and compute placement for performance, compiler infrastructure must provide explicit mechanisms for orchestrating compute and data to fully exploit such architectures. We introduce MLIR-AIR, a novel, open-source compiler stack built on MLIR that bridges the semantic gap between high-level workloads and fine-grained spatial architectures such as AMD's NPUs. MLIR-AIR defines the AIR dialect, which provides structured representations for asynchronous and hierarchical operations across compute and memory resources. AIR primitives allow the compiler to orchestrate spatial scheduling, distribute computation across hardware regions, and overlap communication with computation without relying on ad hoc runtime coordination or manual scheduling. We demonstrate MLIR-AIR's capabilities through two case studies: matrix multiplication and the multi-head attention block from the LLaMA 2 model. For matrix multiplication, MLIR-AIR achieves up to 78.7% compute efficiency and generates implementations with performance almost identical to state-of-the-art, hand-optimized matrix multiplication written using the lower-level, close-to-metal MLIR-AIE framework. For multi-head attention, we demonstrate that the AIR interface supports fused implementations using approximately 150 lines of code, enabling tractable expression of complex workloads with efficient mapping to spatial hardware. MLIR-AIR transforms high-level structured control flow into spatial programs that efficiently utilize the compute fabric and memory hierarchy of an NPU, leveraging asynchronous execution, tiling, and communication overlap through compiler-managed scheduling.
Problem

Research questions and friction points this paper is trying to address.

Bridging semantic gap between AI workloads and spatial architectures
Providing explicit compiler mechanisms for spatial compute orchestration
Enabling efficient mapping of complex workloads to NPU hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLIR-AIR compiler stack bridges high-level workloads and NPUs
AIR dialect enables asynchronous hierarchical compute and memory operations
Compiler orchestrates spatial scheduling and communication-computation overlap
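The asynchronous, hierarchical style these bullets describe can be pictured with a small AIR-style sketch. This is illustrative only and not taken from the paper: op names (`air.herd`, `air.dma_memcpy_nd`, `air.wait_all`) follow the public mlir-air dialect, but the operand syntax, shapes, and the `@mm` herd are assumptions and elided with `...` where uncertain.

```mlir
// Illustrative sketch: a 4x4 herd of compute tiles. The async DMA returns
// a token %t that makes the dependence on data movement explicit, so the
// compiler can overlap communication with computation instead of relying
// on runtime coordination.
air.herd @mm tile (%tx, %ty) in (%sx = %c4, %sy = %c4) {
  %buf = memref.alloc() : memref<64x64xf32, 2>   // tile-local memory
  %t = air.dma_memcpy_nd async (%buf[...], %A[...])   // fetch next tile
  // ... compute on previously fetched data here (overlap window) ...
  air.wait_all [%t]                              // join before first use
  linalg.matmul ins(...) outs(%buf : memref<64x64xf32, 2>)
}
```

The token-based dependence is what lets the scheduler software-pipeline the DMA against the `linalg.matmul` of the previous iteration, the communication–computation overlap the abstract refers to.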
Erwei Wang
AMD
FPGA, Deep Neural Network
Samuel Bayliss
Fellow, AMD Research and Advanced Development
FPGAs, Polyhedral Model, Memory Systems
Andra Bisca
Research and Advanced Development, AMD, USA
Zachary Blair
Research and Advanced Development, AMD, USA
Kristof Denolf
Principal Engineer, Xilinx
Cost Efficient Vision Processing
Jeff Fifield
Research and Advanced Development, AMD, USA
Erika Hunhoff
PhD Candidate, University of Colorado Boulder
cloud computing, operating systems
Phil James-Roxby
Research and Advanced Development, AMD, USA
Jack Lo
Research and Advanced Development, AMD, USA
Joseph Melber
Research and Advanced Development, AMD, USA
Stephen Neuendorffer
Xilinx
Eddie Richter
Research and Advanced Development, AMD, USA
Andre Rosti
Research and Advanced Development, AMD, USA
J. Setoain
Research and Advanced Development, AMD, USA
Gagandeep Singh
Research and Advanced Development, AMD, USA
Endri Taka
Research and Advanced Development, AMD, USA
Pranathi Vasireddy
Research and Advanced Development, AMD, USA
Zhewen Yu
Research and Advanced Development, AMD, USA
Niansong Zhang
Cornell University
Electronic Design Automation, Efficient Deep Learning
Jinming Zhuang
Brown University
Heterogeneous Computing, Domain-specific Accelerator, Programming Abstraction, Compiler