🤖 AI Summary
To address accumulator overflow, a critical source of accuracy degradation in low-precision post-training quantization (PTQ) of large language models, this paper proposes AXE, the first accumulator-aware PTQ framework. Methodologically, AXE models the accumulator jointly with weight/activation quantization: it extends state-of-the-art PTQ algorithms (e.g., GPFQ, OPTQ), enforces provably safe constraints on scaling factors and quantized weights, and introduces multi-stage accumulator modeling with support for generalized multi-level accumulation. Its core contributions are (i) the first PTQ framework to formally guarantee avoidance of accumulator overflow, and (ii) a holistic datapath co-optimization mechanism spanning quantization, scaling, and accumulation. Experiments on image classification and language generation models demonstrate that AXE achieves either a 1–2 bit reduction in accumulator bit width at comparable accuracy, or significant improvements in Top-1 accuracy and language-generation quality at fixed bit widths, thereby substantially improving the accuracy–bit-width trade-off.
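The safety constraint at the heart of accumulator-aware quantization is a closed-form L1-norm budget on the integer weights, known from the accumulator-aware training literature (e.g., A2Q). The sketch below is a minimal illustration, not AXE's actual implementation, and the function names are hypothetical: if inputs are N-bit and the accumulator is a signed P-bit register, then bounding the L1 norm of the quantized weights by (2^(P-1) - 1) / max|x| makes overflow impossible for any input.

```python
import numpy as np

def l1_bound(input_bits: int, acc_bits: int, signed_inputs: bool = False) -> float:
    """Sufficient L1-norm budget on integer weights so a dot product with
    worst-case inputs stays within a signed `acc_bits`-bit accumulator.

    Derivation: |sum_i q_i * x_i| <= ||q||_1 * max|x|, and a signed P-bit
    register holds magnitudes up to 2**(P-1) - 1.
    """
    max_abs_input = 2 ** (input_bits - 1) if signed_inputs else 2 ** input_bits - 1
    return (2 ** (acc_bits - 1) - 1) / max_abs_input

def overflow_safe(q: np.ndarray, input_bits: int, acc_bits: int) -> bool:
    """Check whether an integer weight vector `q` respects the budget."""
    return np.abs(q).sum() <= l1_bound(input_bits, acc_bits)

# Example: 4-bit unsigned activations accumulated into a 16-bit register.
q = np.array([12, -7, 3, 9, -4])   # quantized integer weights
print(l1_bound(4, 16))             # budget on ||q||_1: (2**15 - 1) / 15 ≈ 2184.47
print(overflow_safe(q, 4, 16))     # True: ||q||_1 = 35 is well under budget
```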
📝 Abstract
Several recent studies have investigated low-precision accumulation, reporting improvements in throughput, power, and area across various platforms. However, the accompanying proposals have considered only the quantization-aware training (QAT) paradigm, in which models are fine-tuned or trained from scratch with quantization in the loop. As models continue to grow in size, QAT techniques become increasingly expensive, which has motivated the recent surge in post-training quantization (PTQ) research. To the best of our knowledge, accumulator-aware quantization has not previously been formally studied in the PTQ setting; ours marks the first such study. To bridge this gap, we introduce AXE, a practical framework of accumulator-aware extensions designed to endow existing layer-wise PTQ algorithms with overflow avoidance guarantees. We theoretically motivate AXE and demonstrate its flexibility by implementing it on top of two state-of-the-art PTQ algorithms: GPFQ and OPTQ. We further generalize AXE to support multi-stage accumulation for the first time, opening the door to full datapath optimization and to scaling accumulator-aware PTQ to large language models (LLMs). We evaluate AXE across image classification and language generation models, and observe significant improvements in the trade-off between accumulator bit width and model accuracy over baseline methods.
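To make multi-stage accumulation concrete, the sketch below (an illustration under assumed tile size and register widths, not the paper's datapath) splits a long dot product into tiles, accumulates each tile in a narrow 16-bit inner register, and combines the partial sums in a wider 32-bit outer register. Constraining each tile's weight budget per stage, in the spirit of AXE's generalization, is what keeps the inner stage overflow-free.

```python
import numpy as np

def tiled_dot(q: np.ndarray, x: np.ndarray, tile: int = 64) -> int:
    """Two-level integer accumulation: each tile's partial sum is formed in a
    narrow inner accumulator (int16 here), then partial sums are combined in a
    wider outer accumulator (int32). A per-tile L1 budget on the weights makes
    the inner stage provably safe for any valid input."""
    outer = np.int32(0)
    for start in range(0, len(q), tile):
        inner = np.int16(0)  # narrow inner accumulator
        for w, a in zip(q[start:start + tile], x[start:start + tile]):
            inner = np.int16(inner + np.int16(w) * np.int16(a))
        outer = np.int32(outer + np.int32(inner))  # wide outer accumulator
    return int(outer)

# Usage: small-magnitude weights keep each 64-element tile within int16 range
# (per-tile ||q||_1 <= 64 * 3 = 192; with 4-bit inputs, max partial sum 2880).
rng = np.random.default_rng(0)
q = rng.integers(-3, 4, size=256)   # quantized integer weights
x = rng.integers(0, 16, size=256)   # 4-bit unsigned activations
assert tiled_dot(q, x) == int(q.astype(np.int64) @ x.astype(np.int64))
```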