🤖 AI Summary
This work addresses the hardware inefficiency of Transformer attention, whose cost scales quadratically with sequence length. Selective token attention reduces that cost by attending only to the most relevant tokens, but its sparse and irregular access patterns incur substantial memory access overhead. To mitigate this, the authors propose a data-locality-centric dynamic scheduling mechanism that, for the first time, integrates sparse access patterns with a runtime trace-driven control-compute co-design architecture. By reordering operand streams and applying prefetch and release strategies to intermediate Query/Key vectors, the approach handles irregular data flows with minimal scheduling overhead. Experimental results demonstrate up to 1.76× higher system throughput and up to 2.94× better energy efficiency compared to existing solutions.
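To make the access pattern concrete, below is a minimal NumPy sketch of selective token attention in general (an illustration of the technique, not this paper's hardware design): each query softmaxes over only its top-k keys, and the resulting per-query index gathers are exactly the sparse, irregular memory accesses the proposed scheduler must manage. The function name, shapes, and top-k selection rule are illustrative assumptions.

```python
import numpy as np

def selective_attention(Q, K, V, k):
    """Toy selective token attention (illustrative, not the paper's design):
    each query attends only to its k highest-scoring keys."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])     # full (n_q, n_k) score matrix
    topk = np.argsort(scores, axis=-1)[:, -k:]  # indices of the k most relevant keys per query
    out = np.zeros_like(Q)
    for i, idx in enumerate(topk):
        s = scores[i, idx]
        w = np.exp(s - s.max())
        w /= w.sum()                            # softmax over the selected tokens only
        out[i] = w @ V[idx]                     # sparse, irregular gather of K/V rows
    return out, topk

rng = np.random.default_rng(0)
Q = rng.normal(size=(8, 64))
K = rng.normal(size=(128, 64))
V = rng.normal(size=(128, 64))
out, topk = selective_attention(Q, K, V, k=16)  # topk is the per-query access trace
```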
📝 Abstract
Transformers have become the foundation of numerous state-of-the-art AI models across diverse domains, thanks to their powerful attention mechanism for modeling long-range dependencies. However, the quadratic complexity of attention poses significant challenges for efficient hardware implementation. While techniques such as quantization and pruning help mitigate this issue, selective token attention offers a promising alternative by narrowing the attention scope to only the most relevant tokens, reducing computation and filtering out noise. In this work, we propose SATA, a locality-centric dynamic scheduling scheme that proactively manages sparsely distributed access patterns from selective Query-Key operations. By reordering operand flow and exploiting data locality, our approach enables early fetch and retirement of intermediate Query/Key vectors, improving system utilization. We implement and evaluate our token management strategy in a control-and-compute system, using runtime traces from selective-attention-based models. Experimental results show that our method improves system throughput by up to 1.76x and boosts energy efficiency by 2.94x, while incurring minimal scheduling overhead.
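As a rough software analogue of the scheduling idea, the sketch below takes a per-query top-k index trace (such as the one produced above), reorders the operand stream so queries sharing keys execute back-to-back, and derives per-key prefetch (first-use) and release (last-use) points plus the resulting peak on-chip buffer occupancy. The greedy sort-by-key-set heuristic and the name `schedule_qk_stream` are assumptions for illustration; SATA's actual mechanism is a hardware control-compute co-design, not this Python.

```python
from collections import defaultdict

def schedule_qk_stream(topk):
    """Conceptual locality-centric schedule (not SATA's implementation):
    group queries with overlapping key sets, then mark when each key
    vector should be prefetched and when it can be retired."""
    # Greedy reorder: sorting queries by their selected-key sets places
    # overlapping key accesses close together in the operand stream.
    order = sorted(range(len(topk)), key=lambda q: tuple(sorted(topk[q])))
    first_use, last_use = {}, {}
    for t, q in enumerate(order):
        for key in topk[q]:
            first_use.setdefault(key, t)  # prefetch the key vector here
            last_use[key] = t             # safe to release it after this step
    # Peak number of key vectors simultaneously resident under this schedule.
    live = defaultdict(int)
    for key, start in first_use.items():
        for t in range(start, last_use[key] + 1):
            live[t] += 1
    peak = max(live.values(), default=0)
    return order, first_use, last_use, peak

# Tiny literal trace: three queries, each selecting three keys.
order, first_use, last_use, peak = schedule_qk_stream([[3, 7, 9], [7, 9, 12], [0, 3, 7]])
```

In this toy trace, queries 0 and 2 share keys 3 and 7, so the reorder places them adjacently and those key vectors stay resident between their first and last use; that reuse window is what early fetch and retirement exploit.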