Fast and Fusiest: An Optimal Fusion-Aware Mapper for Accelerator Modeling and Evaluation

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing mappers struggle to find optimal dataflow mappings with operator fusion for tensor algebra accelerators within a reasonable time, as their search space grows exponentially with the number of computation steps. This work proposes the Fast and Fusiest Mapper (FFM), the first approach capable of efficiently searching the complete fused mapping space for optimality. FFM introduces a fusion-aware pruning strategy that eliminates suboptimal partial mappings early, combined with accurate performance modeling and partial-mapping stitching techniques to drastically reduce the search space. Evaluated on Transformer workloads, FFM achieves over 1,000× speedup compared to the state-of-the-art while exhibiting near-linear runtime scaling, effectively overcoming the exponential complexity barrier inherent in fused mapping exploration.

📝 Abstract
The latency and energy of tensor algebra accelerators depend on how data movement and operations are scheduled (i.e., mapped) onto accelerators, so determining the potential of an accelerator architecture requires both a performance model and a mapper to search for the optimal mapping. A key optimization that the mapper must explore is fusion, meaning holding data on-chip between computation steps, which has been shown to reduce energy and latency by reducing DRAM accesses. However, prior mappers cannot find optimal mappings with fusion (i.e., fused mappings) in a feasible runtime because the number of fused mappings to search increases exponentially with the number of workload computation steps. In this paper, we introduce the Fast and Fusiest Mapper (FFM), the first mapper to quickly find optimal mappings in a comprehensive fused mapspace for tensor algebra workloads. FFM shrinks the search space by pruning subsets of mappings (i.e., partial mappings) that are shown to never be a part of optimal mappings, quickly eliminating all suboptimal mappings with those partial mappings as subsets. Then FFM joins partial mappings to construct optimal fused mappings. We evaluate FFM and show that, although the mapspace size grows exponentially with the number of computation steps, FFM's runtime scales approximately linearly. FFM is orders of magnitude faster ($>1000\times$) than prior state-of-the-art approaches at finding optimal mappings for Transformers.
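The abstract's two-phase idea — prune partial mappings that can never appear in an optimal fused mapping, then stitch the survivors together — can be illustrated with a small sketch. This is a hypothetical illustration, not the authors' code: the cost model, the `PartialMapping` fields, and the single on-chip buffer constraint are all simplifying assumptions; FFM's actual pruning criteria and performance model are described in the paper.

```python
# Hypothetical sketch of prune-then-stitch mapping search (not FFM itself).
# Assumption: each partial mapping is summarized by its modeled latency and
# the on-chip bytes it holds for fusion with the next computation step.
from dataclasses import dataclass

@dataclass(frozen=True)
class PartialMapping:
    latency: float     # modeled latency of this computation step
    fused_buffer: int  # on-chip bytes held to fuse with the next step

def prune_dominated(candidates):
    """Keep only Pareto-optimal partial mappings: a mapping that is worse in
    both latency and fused-buffer footprint can never be part of an optimal
    fused mapping, so it is dropped before stitching."""
    kept = []
    for c in sorted(candidates, key=lambda m: (m.latency, m.fused_buffer)):
        if all(c.fused_buffer < k.fused_buffer for k in kept):
            kept.append(c)
    return kept

def stitch(steps, buffer_capacity):
    """Dynamic program over computation steps: pick one pruned partial
    mapping per step so consecutive fused buffers fit on chip, minimizing
    total latency. `steps` is a list of candidate lists."""
    best = {0: 0.0}  # fused buffer carried into the next step -> best latency
    for candidates in steps:
        nxt = {}
        for carried, lat in best.items():
            for m in prune_dominated(candidates):
                if carried + m.fused_buffer <= buffer_capacity:
                    total = lat + m.latency
                    if total < nxt.get(m.fused_buffer, float("inf")):
                        nxt[m.fused_buffer] = total
        best = nxt
    return min(best.values())
```

Because pruning happens per step before stitching, the work per step stays proportional to the pruned frontier rather than the full cross product of mappings, which is the intuition behind FFM's near-linear scaling with the number of computation steps.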
Problem

Research questions and friction points this paper is trying to address.

mapper
fusion
tensor algebra
accelerator modeling
optimal mapping
Innovation

Methods, ideas, or system contributions that make the work stand out.

fusion-aware mapping
tensor algebra accelerators
optimal mapping
mapspace pruning
accelerator modeling
👥 Authors
Tanner Andrulis
MIT
Michael Gilbert
MIT
Vivienne Sze
Professor, EECS at MIT
VLSI, Low-Power Design, Machine Learning, Robotics, Video Coding
Joel S. Emer
MIT / Nvidia