🤖 AI Summary
Deploying Transformers on edge devices is hindered by latency and energy constraints. While INT8 quantization accelerates the matrix multiplications, the floating-point softmax incurs substantial dequantization and requantization overhead (up to 65% of attention latency) and breaks the end-to-end integer dataflow. This work proposes IntAttention, the first plug-and-play, fully integer-quantized attention pipeline, centered on the IndexSoftmax operator: it approximates the exponential entirely in the integer domain via a 32-entry lookup table, sparsity-aware clipping, and integer normalization, eliminating all datatype conversions. The method targets resource-constrained edge CPUs (e.g., Armv8) and requires no model retraining. Experiments show up to 3.7× speedup and 61% energy reduction over FP16 baselines, a 2.0× speedup over conventional INT8 attention, and negligible accuracy degradation.
📝 Abstract
Deploying Transformer models on edge devices is limited by latency and energy budgets. While INT8 quantization effectively accelerates the primary matrix multiplications, it exposes the softmax as the dominant bottleneck. This stage incurs a costly dequantize-softmax-requantize detour, which can account for up to 65% of total attention latency and disrupts the end-to-end integer dataflow critical for edge hardware efficiency. To address this limitation, we present IntAttention, the first fully integer, plug-and-play attention pipeline that requires no retraining. At the core of our approach lies IndexSoftmax, a hardware-friendly operator that replaces the floating-point exponential with computation entirely in the integer domain. IntAttention integrates sparsity-aware clipping, a 32-entry lookup-table approximation, and direct integer normalization, thereby eliminating all datatype-conversion overhead. We evaluate IntAttention and demonstrate consistent and substantial gains. Our method achieves up to 3.7x speedup and 61% energy reduction over FP16 baselines, and is 2.0x faster than conventional INT8 attention pipelines on Armv8 CPUs. These gains come with accuracy comparable to the baselines across diverse language and vision models, enabling practical and efficient Transformer inference on commodity edge devices. Code will be released in a later version of this work.
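To make the idea concrete, here is a minimal NumPy sketch of an integer-only softmax in the spirit described above: a max-subtracted score is mapped to a 32-entry exponential lookup table, scores far below the maximum are clipped toward zero probability, and normalization is done with integer division. All names, the fixed-point scale, and the clipping range are illustrative assumptions for exposition, not the paper's actual IndexSoftmax implementation.

```python
import numpy as np

LUT_BITS = 5          # 32-entry lookup table, as in the described operator
CLIP = 8.0            # assumed clip range: exp arguments below -CLIP map to ~0
SCALE = 2 ** 14       # assumed fixed-point scale for table entries

# Precompute exp(x) for x in [-CLIP, 0], quantized into 32 buckets.
# This table is built once offline; inference below uses only integers.
_idx = np.arange(2 ** LUT_BITS)
EXP_LUT = np.round(np.exp(-CLIP * _idx / (2 ** LUT_BITS - 1)) * SCALE).astype(np.int64)

def index_softmax_sketch(scores: np.ndarray, score_scale: float) -> np.ndarray:
    """Approximate softmax over integer attention scores without float math
    at inference time.

    scores: int32 logits (e.g., INT8 QK^T accumulations)
    score_scale: quantization scale of the scores (a compile-time constant;
                 only used here to derive an integer bucket width)
    Returns int16 probabilities in fixed point (they sum to roughly SCALE).
    """
    x = scores.astype(np.int64) - scores.max()   # max-subtract, stays integer
    # Integer bucket width so that -CLIP (in real units) maps to the last
    # LUT entry; computed once per model, not per token.
    step = max(1, int(round(CLIP / score_scale / (2 ** LUT_BITS - 1))))
    # Scores beyond the clip range all hit the last, near-zero bucket:
    # this is the sparsity-aware clipping of small attention weights.
    idx = np.minimum(-x // step, 2 ** LUT_BITS - 1)
    e = EXP_LUT[idx]                             # integer exp approximation
    # Direct integer normalization: fixed-point divide by the integer sum.
    return ((e * SCALE) // e.sum()).astype(np.int16)
```

For example, `index_softmax_sketch(np.array([50, 20, -300], dtype=np.int32), 0.1)` concentrates almost all of the fixed-point mass on the first score, with the clipped third score receiving a near-zero weight. The design point this illustrates is that the only per-token operations are integer subtraction, shift/divide, a table gather, and an integer division, all cheap on Armv8-class CPUs.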