NestQuant: Post-Training Integer-Nesting Quantization for On-Device DNN

📅 2025-06-21
🤖 AI Summary
To address the challenge of balancing accuracy and efficiency for DNNs deployed on IoT devices with dynamically varying resource constraints, this paper proposes NestQuant, a retraining-free integer-nesting quantization method. The approach introduces the first post-training quantization (PTQ) framework enabling integer weight decomposition and bit-level nested storage, achieved via higher-bit/lower-bit separation and adaptive rounding, thereby supporting dynamic runtime switching between full-precision and reduced-precision inference within a single model. Compared with conventional multi-model PTQ strategies, it significantly reduces both memory footprint and inference-mode switching overhead. On ImageNet-1K, ResNet-101 with INT8 nesting INT6 achieves 78.1% and 77.9% top-1 accuracy in full-bit and part-bit modes, respectively, while reducing switching overheads by approximately 78.1%. This work unifies model compression and multi-precision inference without requiring model retraining.

📝 Abstract
Deploying quantized deep neural network (DNN) models with resource adaptation capabilities on ubiquitous Internet of Things (IoT) devices to provide high-quality AI services can leverage the benefits of compression and meet multi-scenario resource requirements. However, existing dynamic/mixed precision quantization requires retraining or special hardware, whereas post-training quantization (PTQ) has two limitations for resource adaptation: (i) state-of-the-art PTQ methods only provide one fixed-bitwidth model, which makes it challenging to adapt to the dynamic resources of IoT devices; (ii) deploying multiple PTQ models with diverse bitwidths consumes large storage resources and switching overheads. To this end, this paper introduces a resource-friendly post-training integer-nesting quantization, i.e., NestQuant, for on-device quantized model switching on IoT devices. The proposed NestQuant incorporates integer weight decomposition, which bit-wise splits quantized weights into higher-bit and lower-bit weights of integer data types. It also contains a decomposed weights nesting mechanism to optimize the higher-bit weights by adaptive rounding and nest them into the original quantized weights. In deployment, we can send and store only one NestQuant model and switch between the full-bit/part-bit models by paging in/out the lower-bit weights to adapt to resource changes and reduce consumption. Experimental results on ImageNet-1K pretrained DNNs demonstrate that the NestQuant model achieves high top-1 accuracy and reduces data transmission, storage consumption, and switching overheads. In particular, ResNet-101 with INT8 nesting INT6 can achieve 78.1% and 77.9% accuracy for the full-bit and part-bit models, respectively, and reduce switching overheads by approximately 78.1% compared with diverse-bitwidth PTQ models.
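The integer weight decomposition described above can be sketched in a few lines of NumPy. This is a minimal illustration of the bit-wise split for the paper's INT8-nesting-INT6 configuration, assuming signed INT8 weights and a 2-bit residual; the function names and layout are illustrative, not the authors' implementation.

```python
# Minimal sketch: bit-wise split of signed INT8 weights into a 6-bit
# higher part and a 2-bit lower residual (INT8 nesting INT6).
# Names (decompose/recompose, LOW_BITS) are assumptions for illustration.
import numpy as np

LOW_BITS = 2  # 8-bit weights nest a 6-bit model, leaving a 2-bit residual

def decompose(w_int8: np.ndarray):
    """Split signed INT8 weights into higher-bit and lower-bit parts."""
    w = w_int8.astype(np.int16)            # widen to avoid overflow on shifts
    high = w >> LOW_BITS                   # arithmetic shift keeps the sign
    low = w & ((1 << LOW_BITS) - 1)        # non-negative residual in [0, 3]
    return high.astype(np.int8), low.astype(np.int8)

def recompose(high: np.ndarray, low: np.ndarray):
    """Recover the original INT8 weights from the nested parts."""
    return ((high.astype(np.int16) << LOW_BITS) | low).astype(np.int8)

rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=(4, 4), dtype=np.int8)
hi, lo = decompose(w)
assert np.array_equal(recompose(hi, lo), w)  # lossless round trip
```

The higher part alone is a valid 6-bit signed tensor (values in [-32, 31]), so it can serve as the part-bit model while the 2-bit residual is paged out.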
Problem

Research questions and friction points this paper is trying to address.

Adapts DNN models to dynamic IoT resource constraints
Reduces storage and switching overheads in PTQ models
Enables flexible bitwidth switching without retraining

Innovation

Methods, ideas, or system contributions that make the work stand out.

Integer weight decomposition for bit-wise splitting
Decomposed weights nesting for adaptive rounding
Single model storage with dynamic bitwidth switching
Jianhang Xie
City University of Hong Kong, BJTU
Efficient Deep Learning, Edge AI, AI Security
Chuntao Ding
Beijing Normal University
Edge Intelligence, Edge Computing, Deep Learning
Xiaqing Li
Key Laboratory of Big Data & Artificial Intelligence in Transportation (Beijing Jiaotong University), Ministry of Education, School of Computer Science and Technology, Beijing Jiaotong University, Beijing, China
Shenyuan Ren
Key Laboratory of Big Data & Artificial Intelligence in Transportation (Beijing Jiaotong University), Ministry of Education, School of Computer Science and Technology, Beijing Jiaotong University, Beijing, China
Yidong Li
Beijing Jiaotong University
Privacy Preserving, Data Mining, Social Network Analysis, Multimedia Computing
Zhichao Lu
City University of Hong Kong
Evolutionary Computation, Bilevel Optimization, Neural Architecture Search