TruncQuant: Truncation-Ready Quantization for DNNs with Flexible Weight Bit Precision

📅 2025-06-13
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the severe accuracy degradation caused by dynamic quantization truncation, and the inflexibility in adapting to diverse hardware bit-widths when deploying DNNs on edge devices, this paper proposes a "truncation-ready" quantization-aware training (QAT) paradigm. Methodologically, the authors introduce a truncation-alignment loss function and a shift-friendly weight scaling mechanism into the QAT framework, enabling a single trained model to natively support runtime bit-width switching across 1–8-bit weights without retraining or additional overhead. The key contribution is the first explicit modeling of truncation as an optimization objective during training, thereby unifying bit-width adaptability with hardware efficiency. On benchmarks including ImageNet, the approach reduces cross-bit-width inference accuracy drop by 40% compared to conventional QAT, while significantly lowering deployment latency.

๐Ÿ“ Abstract
The deployment of deep neural networks on edge devices is a challenging task due to the increasing complexity of state-of-the-art models, requiring efforts to reduce model size and inference latency. Recent studies explore models operating at diverse quantization settings to find the optimal point that balances computational efficiency and accuracy. Truncation, an effective approach for achieving lower bit precision mapping, enables a single model to adapt to various hardware platforms with little to no cost. However, formulating a training scheme for deep neural networks to withstand the associated errors introduced by truncation remains a challenge, as the current quantization-aware training schemes are not designed for the truncation process. We propose TruncQuant, a novel truncation-ready training scheme allowing flexible bit precision through bit-shifting in runtime. We achieve this by aligning TruncQuant with the output of the truncation process, demonstrating strong robustness across bit-width settings, and offering an easily implementable training scheme within existing quantization-aware frameworks. Our code is released at https://github.com/a2jinhee/TruncQuant.
Problem

Research questions and friction points this paper is trying to address.

Reducing DNN model size and latency for edge deployment
Enabling flexible bit precision via truncation in neural networks
Training DNNs to withstand truncation errors effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Truncation-ready training for flexible bit precision
Bit-shifting enables runtime bit precision adjustment
Robust quantization-aware training with truncation alignment
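The bit-shifting idea above can be illustrated with a minimal sketch: an 8-bit quantized weight can be mapped to a lower precision at runtime simply by right-shifting away its least-significant bits. This is a hypothetical illustration of truncation as described in the abstract, not the authors' released implementation (see their repository for the actual code); the function name `truncate_weights` is our own.

```python
import numpy as np

def truncate_weights(q8: np.ndarray, target_bits: int) -> np.ndarray:
    """Truncate signed 8-bit quantized weights to `target_bits` precision
    by arithmetic right shift, discarding the low (8 - target_bits) bits.
    Hypothetical sketch of runtime truncation; not the paper's code."""
    shift = 8 - target_bits
    return q8 >> shift  # arithmetic shift keeps the sign of negative weights

# Example: one 8-bit weight tensor served at 4-bit precision at runtime.
q8 = np.array([-128, -37, 0, 85, 127], dtype=np.int8)
q4 = truncate_weights(q8, 4)  # values now fit the signed 4-bit range [-8, 7]
```

Because the low bits are simply dropped, the same stored 8-bit model can serve any precision from 1 to 8 bits with no retraining; the paper's training scheme is what makes the network robust to the rounding error this shift introduces.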
Authors
Jinhee Kim
Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea
Seoyeon Yoon
School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Korea
Taeho Lee
School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Korea
Joo Chan Lee
Sungkyunkwan University
Computer Vision · Efficient Deep Learning
Kang Eun Jeon
Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea
Jong Hwan Ko
SungKyunKwan Univ. (SKKU)
Deep learning accelerator · Image/audio processing · VLSI/IoT systems design