🤖 AI Summary
To address the severe accuracy degradation caused by dynamic quantization truncation and the inflexibility in adapting to diverse hardware bitwidths during edge-device deployment of DNNs, this paper proposes a "truncation-ready" quantization-aware training (QAT) paradigm. Methodologically, we introduce a truncation-alignment loss function and a shift-friendly weight scaling mechanism into the QAT framework, enabling a single trained model to natively support runtime bitwidth switching across 1- to 8-bit weights without retraining or additional overhead. Our key contribution is the first explicit modeling of truncation as an optimization objective during training, thereby unifying bitwidth adaptability with hardware efficiency. On benchmarks including ImageNet, our approach reduces cross-bitwidth inference accuracy drop by 40% compared to conventional QAT, while significantly lowering deployment latency.
📄 Abstract
The deployment of deep neural networks on edge devices is challenging due to the increasing complexity of state-of-the-art models, requiring efforts to reduce model size and inference latency. Recent studies explore models operating at diverse quantization settings to find the optimal point that balances computational efficiency and accuracy. Truncation, an effective approach for achieving lower bit precision mapping, enables a single model to adapt to various hardware platforms at little to no cost. However, formulating a training scheme for deep neural networks to withstand the errors introduced by truncation remains a challenge, as current quantization-aware training schemes are not designed for the truncation process. We propose TruncQuant, a novel truncation-ready training scheme allowing flexible bit precision through bit-shifting at runtime. We achieve this by aligning TruncQuant with the output of the truncation process, demonstrating strong robustness across bit-width settings and offering an easily implementable training scheme within existing quantization-aware frameworks. Our code is released at https://github.com/a2jinhee/TruncQuant.
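To make the core idea concrete, the following is a rough illustrative sketch (not the paper's actual implementation, whose training scheme lives in the linked repository): once weights are quantized to integers, mapping them to a lower bit precision at runtime can be done with a single right bit-shift, which is why truncation is nearly free on hardware.

```python
import numpy as np

def truncate_bits(q, src_bits, dst_bits):
    """Map integer-quantized values from src_bits to dst_bits precision
    via a right shift (illustrative sketch, not TruncQuant's exact scheme)."""
    assert dst_bits <= src_bits
    return q >> (src_bits - dst_bits)

# Unsigned 8-bit quantized weights in [0, 255]
q8 = np.array([0, 37, 128, 200, 255], dtype=np.int32)

# Truncate to 4-bit precision: values now lie in [0, 15]
q4 = truncate_bits(q8, 8, 4)
print(q4)  # [ 0  2  8 12 15]
```

The shift discards the low-order bits, so values that differ only in those bits collapse to the same code; this rounding-toward-zero error is the truncation error that a truncation-ready training scheme must anticipate.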