SOLE: Hardware-Software Co-design of Softmax and LayerNorm for Efficient Transformer Inference

📅 2025-10-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In Transformer inference, Softmax and LayerNorm impose significant computational and memory overheads that hinder real-time performance. This paper proposes SOLE, a hardware-software co-design framework built on two novel approximate operators: E2Softmax and AILayerNorm. Without requiring model retraining, SOLE achieves high-accuracy, low-overhead approximation via log₂-based quantization, logarithmic-domain division, and low-precision statistic computation, enabling both low-precision arithmetic and sub-8-bit storage. Against GPU baselines, SOLE delivers orders-of-magnitude speedup and energy savings; against state-of-the-art custom accelerators, it achieves 3.04×–3.86× higher energy efficiency and 2.82×–3.32× better area efficiency. The core contribution is the first retraining-free, hardware-software co-optimized acceleration solution for Softmax and LayerNorm in Transformer inference that simultaneously balances accuracy, latency, energy, and silicon area.
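As a rough illustration of the E2Softmax idea, the sketch below models Softmax with a log₂-quantized exponent and a division performed as a subtraction in the log domain. The `frac_bits` parameter and the uniform rounding scheme are illustrative assumptions, not the paper's exact hardware datapath.

```python
import math

LOG2E = math.log2(math.e)  # constant for rewriting exp(x) as 2**(x * log2(e))

def e2softmax_sketch(xs, frac_bits=3):
    """Software model of a log2-based Softmax approximation (illustrative).

    exp(x) is rewritten as 2**(x * log2(e)); the base-2 exponent is
    quantized to `frac_bits` fractional bits, so each term maps to a
    cheap shift in hardware, and the final division becomes a log-domain
    subtraction: p_i = 2**(e_i - log2(sum_j 2**e_j)).
    """
    m = max(xs)                          # max-subtraction for numerical stability
    step = 1 << frac_bits
    # quantized base-2 exponents (powers of two with few fractional bits)
    eq = [round((x - m) * LOG2E * step) / step for x in xs]
    log_den = math.log2(sum(2.0 ** e for e in eq))
    return [2.0 ** (e - log_den) for e in eq]
```

Because the denominator is folded into the exponents, the outputs still sum to one; only the per-element probabilities carry quantization error.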

📝 Abstract
Transformers have shown remarkable performance in both natural language processing (NLP) and computer vision (CV) tasks. However, their real-time inference speed and efficiency are limited due to the inefficiency in Softmax and Layer Normalization (LayerNorm). Previous works based on function approximation suffer from inefficient implementation as they place emphasis on computation while disregarding memory overhead concerns. Moreover, such methods rely on retraining to compensate for approximation error which can be costly and inconvenient. In this paper, we present SOLE, a hardware-software co-design for Softmax and LayerNorm which is composed of E2Softmax and AILayerNorm. E2Softmax utilizes log2 quantization of exponent function and log-based division to approximate Softmax while AILayerNorm adopts low-precision statistic calculation. Compared with state-of-the-art designs, we achieve both low-precision calculation and low bit-width storage on Softmax and LayerNorm. Experiments show that SOLE maintains inference accuracy without retraining while offering orders of magnitude speedup and energy savings over GPU, achieving 3.04x, 3.86x energy-efficiency improvements and 2.82x, 3.32x area-efficiency improvements over prior state-of-the-art custom hardware for Softmax and LayerNorm, respectively.
Problem

Research questions and friction points this paper is trying to address.

Addresses the inefficiency of Softmax and LayerNorm in Transformer inference
Reduces both computation cost and memory overhead through hardware-software co-design
Maintains accuracy without retraining while improving speed and energy efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware-software co-design for Softmax and LayerNorm
Log2 quantization and log-based division for Softmax
Low-precision statistic calculation for LayerNorm
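A minimal software sketch of the AILayerNorm direction: the mean and variance are estimated from a coarsely quantized copy of the activations, while normalization is applied to the full-precision input. The uniform `stat_bits` quantizer here is a stand-in assumption for the paper's hardware-friendly statistic datapath.

```python
import math

def ailayernorm_sketch(xs, stat_bits=4, eps=1e-5):
    """LayerNorm with low-precision statistics (illustrative model).

    Mean and variance are computed on a `stat_bits`-wide uniformly
    quantized copy of the input, shrinking the statistic datapath;
    the normalization itself uses the original full-precision values.
    """
    peak = max(abs(x) for x in xs) or 1.0
    scale = peak / (2 ** (stat_bits - 1) - 1)    # uniform quantizer step
    xq = [round(x / scale) * scale for x in xs]  # quantized copy (stats only)
    mu = sum(xq) / len(xq)
    var = sum((x - mu) ** 2 for x in xq) / len(xq)
    return [(x - mu) / math.sqrt(var + eps) for x in xs]
```

Since statistics average over the whole feature vector, the coarse quantization perturbs the output mean and variance only slightly, which is the intuition behind computing them at low precision.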
Wenxun Wang
Department of Electronic Engineering, Tsinghua University, Beijing, China
Shuchang Zhou
Megvii Inc.
Artificial Intelligence
Wenyu Sun
Department of Electronic Engineering, Tsinghua University, Beijing, China
Peiqin Sun
MEGVII Technology, Beijing, China
Yongpan Liu
Professor @ Tsinghua University
Machine Learning · Nonvolatile Memory and Computing · Energy Efficient VLSI · Embedded System · Design Methodology