DFQ-ViT: Data-Free Quantization for Vision Transformers without Fine-tuning

📅 2025-07-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing data-free quantization methods struggle to jointly model global and local features of samples and neglect the substantial distribution shift in intermediate-layer activations between quantized and full-precision models, leading to severe accuracy degradation. This paper proposes the first data-free, fine-tuning-free quantization framework tailored for Vision Transformers (ViTs). First, it introduces a progressive difficulty-aware sample synthesis strategy that integrates generative adversarial learning with hard-example mining to enhance synthetic data quality. Second, it proposes a learnable activation correction matrixโ€”enabling, for the first time in data-free settings, explicit alignment of intermediate-layer activation distributions between quantized and full-precision ViTs. Evaluated on DeiT-Tiny, our 3-bit weight-only quantization achieves a 4.29% accuracy improvement over the state-of-the-art, matching the performance of real-data-based quantization, while significantly reducing energy consumption and deployment costs on edge devices.
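The summary does not specify how "progressive difficulty" is scored. One plausible reading, sketched below under the assumption that difficulty is measured by the full-precision teacher's per-sample loss (an illustrative choice, not the paper's stated measure), is to order synthetic calibration samples from easy to hard:

```python
import numpy as np

def order_by_difficulty(samples, difficulty):
    """Return samples sorted from easy to hard.

    `difficulty` holds one scalar score per sample; here we assume the
    full-precision teacher's per-sample loss, which is only an
    illustrative stand-in for the paper's actual measure.
    """
    order = np.argsort(difficulty)  # low score = easy, comes first
    return [samples[i] for i in order]

# Toy example: four synthetic samples with assumed teacher losses.
samples = ["s0", "s1", "s2", "s3"]
teacher_loss = np.array([2.1, 0.3, 1.4, 0.7])
curriculum = order_by_difficulty(samples, teacher_loss)
print(curriculum)  # ['s1', 's3', 's2', 's0']
```

Calibrating on such a curriculum lets early calibration steps see samples the teacher handles confidently before harder, more informative ones arrive.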

๐Ÿ“ Abstract
Data-Free Quantization (DFQ) enables the quantization of Vision Transformers (ViTs) without requiring access to data, allowing for the deployment of ViTs on devices with limited resources. In DFQ, the quantized model must be calibrated using synthetic samples, making the quality of these synthetic samples crucial. Existing methods fail to fully capture and balance the global and local features within the samples, resulting in limited synthetic data quality. Moreover, we have found that during inference there is a significant difference in the distributions of intermediate-layer activations between the quantized and full-precision models. These issues lead to severe performance degradation of the quantized model. To address these problems, we propose a pipeline for Data-Free Quantization for Vision Transformers (DFQ-ViT). Specifically, we synthesize samples in order of increasing difficulty, effectively enhancing the quality of the synthetic data. During the calibration and inference stages, we introduce an activation correction matrix for the quantized model to align its intermediate-layer activations with those of the full-precision model. Extensive experiments demonstrate that DFQ-ViT achieves remarkable superiority over existing DFQ methods, and its performance is on par with models quantized using real data. For example, DeiT-T with 3-bit weight quantization outperforms the state-of-the-art by 4.29%. Our method eliminates the need for fine-tuning, which not only reduces computational overhead but also lowers the deployment barrier for edge devices. This characteristic aligns with the principles of Green Learning by improving energy efficiency and facilitating real-world applications in resource-constrained environments.
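For readers unfamiliar with what "3-bit weight quantization" entails, a minimal per-tensor symmetric uniform quantizer is sketched below. This is a common baseline scheme, not necessarily the exact quantizer used in the paper:

```python
import numpy as np

def quantize_uniform(w, n_bits=3):
    """Symmetric uniform quantization: round each weight to one of the
    signed integer levels in [-qmax, qmax], then dequantize."""
    qmax = 2 ** (n_bits - 1) - 1        # qmax = 3 for 3-bit
    scale = np.abs(w).max() / qmax      # single per-tensor scale
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q * scale, q                 # dequantized weights, integer codes

w = np.array([-0.9, -0.31, 0.0, 0.29, 0.6, 0.9])
w_hat, q = quantize_uniform(w)
print(q)  # [-3 -1  0  1  2  3]
# w_hat is approximately [-0.9, -0.3, 0.0, 0.3, 0.6, 0.9]
```

At 3 bits, every weight collapses onto only seven distinct values, which is why calibration quality matters so much: the quantizer's scale must be set from representative activations and weights.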
Problem

Research questions and friction points this paper is trying to address.

Quantizing Vision Transformers without data access
Balancing global and local features in synthetic samples
Aligning activation distributions between quantized and full-precision models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthesizes samples in order of increasing difficulty
Uses activation correction matrix
Eliminates fine-tuning for efficiency
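The activation correction matrix itself is not detailed in this summary. As an illustrative stand-in (a diagonal special case of such a matrix), a per-channel scale and shift that maps a quantized layer's activations onto the full-precision model's can be fit by least squares from calibration activations:

```python
import numpy as np

def fit_correction(a_fp, a_q):
    """Fit a per-channel scale and shift (a diagonal special case of a
    correction matrix) mapping quantized activations a_q (N x C) onto
    full-precision activations a_fp (N x C) by least squares."""
    mu_q, mu_fp = a_q.mean(axis=0), a_fp.mean(axis=0)
    cov = ((a_q - mu_q) * (a_fp - mu_fp)).mean(axis=0)
    gamma = cov / (a_q.var(axis=0) + 1e-8)   # per-channel scale
    beta = mu_fp - gamma * mu_q              # per-channel shift
    return gamma, beta

rng = np.random.default_rng(0)
a_fp = rng.normal(size=(256, 8))                                # "full-precision" activations
a_q = 0.7 * a_fp + 0.3 + rng.normal(scale=0.05, size=(256, 8))  # distorted "quantized" activations
gamma, beta = fit_correction(a_fp, a_q)
a_corr = gamma * a_q + beta
err_before = float(np.mean((a_q - a_fp) ** 2))
err_after = float(np.mean((a_corr - a_fp) ** 2))
print(err_after < err_before)  # True: correction tightens the alignment
```

The paper's learnable matrix presumably captures richer cross-channel structure than this diagonal sketch, but the goal is the same: shrink the intermediate-layer distribution gap at both calibration and inference time.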
Yujia Tong
Wuhan University of Technology
Machine Learning, Efficient Computing
Jingling Yuan
School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Hubei Key Laboratory of Transportation Internet of Things, China
Tian Zhang
School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Hubei Key Laboratory of Transportation Internet of Things, China
Jianquan Liu
Director | Senior Principal Researcher, Visual Intelligence Research Laboratories, NEC Corporation
Database, Multimedia, Data Mining, Information Retrieval
Chuang Hu
State Key Laboratory of Internet of Things for Smart City, University of Macau, China