🤖 AI Summary
To address AI inference efficiency bottlenecks on AMD's second-generation AIE-ML architecture, this paper proposes the first end-to-end compilation framework that fully automates the mapping of quantized neural networks onto 2D heterogeneous AIE arrays with entirely on-chip model execution. The method introduces: (1) a placement and search algorithm for computation graphs that is aware of the device's physical 2D grid; (2) a structured parallelization scheme that jointly schedules AIE cores and dedicated memory tiles; and (3) bit-accurate compilation with VLIW instruction-level scheduling, explicit dataflow modeling, and native support for fused operators. The framework is forward-compatible with AIE-MLv2, achieves near-peak single-core throughput, attains 98.6% layer-scaling efficiency, and utilizes 296 of 304 AIE tiles (97.4%). Experimental results demonstrate throughput competitive with GPUs at microsecond-scale latency.
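To make the "bit-accurate compilation with fused operators" claim concrete, here is a minimal reference sketch of the arithmetic such a framework must reproduce exactly: an integer linear layer with fused bias addition and ReLU, using 8-bit operands, 32-bit accumulation, and a requantization shift. This is an illustrative model of quantized inference in plain Python, not the paper's AIE firmware; the function name, shapes, and the power-of-two `shift` parameter are assumptions for the sketch.

```python
def qlinear_relu(x, w, b, shift=8):
    """Reference int8 linear layer with fused bias + ReLU.

    x: input vector of int8 values
    w: weight matrix (rows = output channels) of int8 values
    b: per-output int32 biases
    shift: power-of-two requantization shift (illustrative assumption)
    """
    out = []
    for row, bias in zip(w, b):
        # Multiply-accumulate in 32-bit precision, then add the fused bias.
        acc = sum(xi * wi for xi, wi in zip(x, row)) + bias
        acc = max(acc, 0)          # fused ReLU activation
        acc >>= shift              # requantize back toward int8 range
        out.append(max(-128, min(127, acc)))  # saturate to int8
    return out
```

A bit-exact compiler must produce firmware whose outputs match such a reference model value-for-value, which is what allows quantized models imported from tools like hls4ml or PyTorch to be validated against the hardware.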
📝 Abstract
Efficient AI inference on AMD's Versal AI Engine (AIE) is challenging due to tightly coupled VLIW execution, explicit datapaths, and local memory management. Prior work focused on first-generation AIE kernel optimizations without tackling full neural network execution across the 2D array. In this work, we present AIE4ML, the first comprehensive framework for automatically converting AI models into optimized firmware targeting AIE-ML generation devices, with forward compatibility with the newer AIE-MLv2 architecture. At the single-kernel level, we attain performance close to the architectural peak. At the graph and system levels, we provide a structured parallelization method that scales across the 2D AIE-ML fabric and exploits its dedicated memory tiles to stay entirely on-chip throughout model execution. As a demonstration, we designed a generalized and highly efficient linear-layer implementation with intrinsic support for fused bias addition and ReLU activation. Because the framework must generate multi-layer implementations, it systematically derives deterministic, compact, and topology-optimized placements tailored to the device's physical 2D grid through a novel graph placement and search algorithm. Finally, the framework seamlessly accepts quantized models imported from high-level tools such as hls4ml or PyTorch while preserving bit-exactness. In layer-scaling benchmarks, we achieve up to 98.6% efficiency relative to the single-kernel baseline, utilizing 296 of the device's 304 AIE tiles (97.4%) with entirely on-chip data movement. With evaluations across real-world model topologies, we demonstrate that AIE4ML delivers GPU-class throughput under microsecond latency constraints, making it a practical companion for ultra-low-latency environments such as trigger systems in particle physics experiments.
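The abstract's "deterministic, compact, and topology-optimized placements tailored to the physical 2D grid" can be illustrated with a toy placement routine. The sketch below is an assumption for illustration only (the paper's actual placement and search algorithm is more sophisticated): it assigns a chain of layers, each needing a given number of tiles, to a rows-by-cols grid in a column-major "snake" order, so consecutive layers occupy physically adjacent tiles and dataflow neighbors stay close.

```python
def snake_place(layer_tiles, rows, cols):
    """Toy deterministic placement of a layer chain onto a 2D tile grid.

    layer_tiles: tiles required by each layer, in dataflow order
    rows, cols: physical dimensions of the AIE tile array
    Returns placement[k] = list of (row, col) tiles assigned to layer k.
    """
    # Enumerate the grid in column-major snake order: even columns go
    # top-to-bottom, odd columns bottom-to-top, so the walk never jumps.
    coords = []
    for c in range(cols):
        col_rows = range(rows) if c % 2 == 0 else range(rows - 1, -1, -1)
        coords.extend((r, c) for r in col_rows)
    if sum(layer_tiles) > len(coords):
        raise ValueError("model does not fit on the tile array")
    placement, i = [], 0
    for n in layer_tiles:
        placement.append(coords[i:i + n])
        i += n
    return placement
```

Because the walk order is fixed, the placement is deterministic and compact by construction; a real framework would additionally search over such orderings to optimize for the device's actual interconnect topology.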