AI Summary
Existing data-free quantization methods struggle to jointly model the global and local features of samples, and they neglect the substantial distribution shift in intermediate-layer activations between quantized and full-precision models, leading to severe accuracy degradation. This paper proposes the first data-free, fine-tuning-free quantization framework tailored for Vision Transformers (ViTs). First, it introduces a progressive difficulty-aware sample synthesis strategy that integrates generative adversarial learning with hard-example mining to enhance synthetic data quality. Second, it proposes a learnable activation correction matrix, enabling, for the first time in a data-free setting, explicit alignment of intermediate-layer activation distributions between the quantized and full-precision ViTs. Evaluated on DeiT-Tiny, our 3-bit weight-only quantization achieves a 4.29% accuracy improvement over the state of the art, matching the performance of real-data-based quantization while significantly reducing energy consumption and deployment costs on edge devices.
Abstract
Data-Free Quantization (DFQ) enables the quantization of Vision Transformers (ViTs) without access to real data, allowing ViTs to be deployed on resource-limited devices. In DFQ, the quantized model must be calibrated with synthetic samples, making the quality of those samples crucial. Existing methods fail to fully capture and balance the global and local features within the samples, limiting the quality of the synthetic data. Moreover, we find that during inference there is a significant difference between the distributions of intermediate-layer activations in the quantized and full-precision models. Together, these issues cause severe performance degradation in the quantized model. To address them, we propose a pipeline for Data-Free Quantization for Vision Transformers (DFQ-ViT). Specifically, we synthesize samples in order of increasing difficulty, effectively enhancing the quality of the synthetic data. During the calibration and inference stages, we introduce an activation correction matrix for the quantized model to align its intermediate-layer activations with those of the full-precision model. Extensive experiments demonstrate that DFQ-ViT achieves remarkable superiority over existing DFQ methods, and its performance is on par with models quantized using real data. For example, DeiT-T with 3-bit weight quantization outperforms the state of the art by 4.29%. Our method eliminates the need for fine-tuning, which not only reduces computational overhead but also lowers deployment barriers for edge devices. This aligns with the principles of Green Learning by improving energy efficiency and facilitating real-world applications in resource-constrained environments.
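The core idea of the activation correction matrix can be sketched as a small optimization problem: learn a linear map that pushes a quantized block's intermediate activations toward the full-precision block's activations. The sketch below is illustrative only, assuming synthetic activation tensors and plain gradient descent; the dimensions, step count, and learning rate are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16          # hypothetical embedding dimension (DeiT-Tiny uses 192)
n_tokens = 197  # tokens per image in a 224x224 ViT (196 patches + CLS)

# Stand-ins for intermediate-layer activations: the quantized model's
# output act_q is a perturbed copy of the full-precision target act_fp.
act_q = rng.standard_normal((n_tokens, d))
act_fp = act_q + 0.1 * rng.standard_normal((n_tokens, d))

# Learnable correction matrix, initialized to identity (a no-op).
A = np.eye(d)

# Minimize mean squared error between corrected and full-precision
# activations with simple gradient descent.
lr = 1e-2
for _ in range(500):
    err = act_q @ A - act_fp
    grad = act_q.T @ err / n_tokens  # gradient of the MSE w.r.t. A (up to a constant)
    A -= lr * grad

baseline = np.mean((act_q - act_fp) ** 2)  # error with no correction
final = np.mean((act_q @ A - act_fp) ** 2)  # error after correction
```

At inference time, the learned matrix is simply applied to the quantized activations, so no fine-tuning of the model weights themselves is required, which is what keeps the method cheap to deploy.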