🤖 AI Summary
This work addresses the performance bottleneck posed by Softmax computation in Transformer multi-head attention (MHA) modules on low-precision edge devices, particularly pronounced in small models and integer-based inference scenarios. To mitigate this, the authors propose Head-Calibrated Clipped-Linear Softmax (HCCS), a bounded monotonic alternative to the exponential function that applies clipped linear mapping to centered attention logits and incorporates lightweight, per-head calibration parameters optimized offline to preserve the original statistical properties. HCCS enables the first native int8 implementation on AMD Versal AI Engines and, when combined with quantization-aware retraining and hardware-aware co-optimization, achieves significantly higher inference throughput than existing bfloat16 or lookup-table-based approaches while maintaining task accuracy.
📝 Abstract
Softmax can become a computational bottleneck in the Transformer model's Multi-Head Attention (MHA) block, particularly in small models under low-precision inference, where exponentiation and normalization incur significant overhead. We therefore propose Head-Calibrated Clipped-Linear Softmax (HCCS), a bounded, monotone surrogate for the exponential softmax function that applies a clipped linear mapping to the max-centered attention logits. This approximation produces a stable probability distribution, preserves the ordering of the original logits, and yields non-negative values. HCCS differs from previous softmax surrogates in that it includes a set of lightweight calibration parameters, optimized offline on a representative dataset and calibrated per attention head, to preserve each head's statistical properties. We describe a hardware-motivated implementation of HCCS for high-throughput scenarios targeting the AMD Versal AI Engines. The current reference implementations from AMD for this platform rely on either bfloat16 arithmetic or lookup tables (LUTs) to perform the exponential operation, which can limit throughput and leave the AI Engine's high-throughput integer vector processing units underutilized. In contrast, HCCS maps naturally onto the AI Engines' int8 multiply-accumulate (MAC) units. To the best of our knowledge, this is the first int8-optimized softmax surrogate for AMD AI Engines; it significantly exceeds the throughput of the reference implementations while maintaining competitive task accuracy on small or heavily quantized MHA workloads after quantization-aware retraining.
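To make the idea concrete, here is a minimal float sketch of a clipped-linear softmax surrogate of the kind the abstract describes: logits are max-centered, passed through a clipped linear map, and normalized. The parameter names `alpha` (slope) and `beta` (clip bound) are hypothetical stand-ins for the per-head calibration parameters, which the paper optimizes offline; the exact functional form and the int8 mapping used on the AI Engines are not reproduced here.

```python
import numpy as np

def clipped_linear_softmax(logits, alpha=0.05, beta=1.0):
    """Illustrative clipped-linear softmax surrogate (not the paper's exact form).

    logits: array of shape (heads, seq_len); alpha and beta stand in for the
    per-head calibration parameters described in the abstract (scalars here
    for simplicity).
    """
    # Max-center the logits so the largest entry maps to zero.
    z = logits - logits.max(axis=-1, keepdims=True)
    # Clipped linear map: bounded in [0, beta], monotone in the logits,
    # and non-negative by construction.
    w = np.clip(beta + alpha * z, 0.0, beta)
    # Normalize to obtain a valid probability distribution.
    return w / w.sum(axis=-1, keepdims=True)
```

Because the centered logits are non-positive, the map is bounded above by `beta` and clipped at zero below, so the output is a valid, order-preserving attention distribution without any exponentiation.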