🤖 AI Summary
This work addresses the challenge of optimizing dataflow for attention computation, which entails a vast design space with intricate trade-offs among tiling, scheduling, and buffering, making it intractable for conventional approaches. The authors propose MMEE, a novel method that, for the first time, encodes cross-operator attention dataflows into a matrix representation and integrates analytical performance modeling, design space pruning, and exhaustive search to jointly optimize both energy efficiency and latency. Under energy-efficiency-oriented optimization, MMEE reduces energy consumption by 48%–50% and latency by 31%–69% compared to state-of-the-art techniques. When prioritizing latency minimization, it simultaneously cuts energy usage by 40%–50% and latency by 40%–69%, while accelerating the search process by 64× to 343×.
📝 Abstract
Attention is a fundamental computational kernel that accounts for the majority of the workload in transformer and LLM computing. Optimizing dataflow is crucial for improving both performance and energy efficiency in attention computation. This optimization involves a range of decisions, such as tiling, computation ordering, and buffer management, and can be applied at both the intra-operator and inter-operator levels, resulting in a highly complex decision space. We propose a new approach to cross-operator dataflow optimization. Its centerpiece is an analytical performance model that spans a large decision space and enables matrix-based encoding of multiple candidate solutions. Built on this foundation, a vast number of solutions can be evaluated rapidly, and with the aid of an effective pruning technique, the optimal solution can be identified through exhaustive enumeration. We refer to our method as MMEE (Matrix Multiplication Encoded Enumeration). The ability to efficiently enumerate a large design space allows MMEE to deliver higher-quality solutions substantially faster than prior approaches. MMEE is evaluated across various test cases and accelerator configurations. For energy-driven optimization, MMEE reduces energy consumption by 48%–50% and latency by 31%–69% compared to state-of-the-art methods. For latency-driven optimization, MMEE achieves simultaneous reductions of 40%–50% in energy consumption and 40%–69% in latency. Additionally, MMEE is $64\times$ to $343\times$ faster than previous works.
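To make the "matrix-encoded enumeration" idea concrete, the sketch below shows one way such a pipeline could look: candidate dataflows are enumerated as rows of a matrix, infeasible rows are pruned, and a single matrix product against analytical cost coefficients scores every survivor at once. This is purely illustrative and not the paper's actual encoding or cost model; the features (tile sizes, buffer capacity), the fp16 footprint check, and the linear coefficients are all invented for the example.

```python
import numpy as np

# Hypothetical sketch: each row of `candidates` encodes one dataflow
# choice (tile_m, tile_n, buffer_kb); a made-up analytical model supplies
# linear cost coefficients so one matmul scores all candidates at once.

# 1) Enumerate a toy design space as a matrix of candidate encodings.
candidates = np.array([(tm, tn, b)
                       for tm in (32, 64, 128)
                       for tn in (32, 64, 128)
                       for b in (4, 16, 64)], dtype=float)

# 2) Prune infeasible points: an fp16 output tile must fit in the buffer.
footprint_kb = candidates[:, 0] * candidates[:, 1] * 2 / 1024
feasible = candidates[footprint_kb <= candidates[:, 2]]

# 3) Illustrative analytical model: per-feature cost coefficients for
#    energy and latency (larger buffers assumed to hide latency here).
energy_coeff = np.array([0.5, 0.5, 0.1])
latency_coeff = np.array([1.0, 1.0, -0.2])

# 4) Score every surviving candidate with one matrix product, then pick
#    the best row per objective by exhaustive argmin.
costs = feasible @ np.stack([energy_coeff, latency_coeff], axis=1)
best_energy = feasible[np.argmin(costs[:, 0])]   # energy-driven choice
best_latency = feasible[np.argmin(costs[:, 1])]  # latency-driven choice
```

The point of the matrix form is that scoring scales as one dense product over all candidates, so pruning plus exhaustive enumeration stays cheap even for large spaces.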