🤖 AI Summary
This work addresses the limited robustness of models to post-training quantization (PTQ) when no target-task data is available, proposing a zero-shot transfer method. It introduces "quantization vectors," extracted via arithmetic operations in weight space and transferred from a source model to a target Vision Transformer (ViT). This approach demonstrates for the first time that quantization robustness corresponds to a transferable direction in weight space, enabling significant improvements in resilience to low-bit quantization noise without quantization-aware training, fine-tuning, or any data from the target task. Experiments show that the method can improve PTQ robustness on ViTs by up to 60%, establishing a novel paradigm for low-cost, zero-shot quantization.
📄 Abstract
We show that robustness to post-training quantization (PTQ) is a transferable direction in weight space. We call this direction the quantization vector: extracted from a donor task by simple weight-space arithmetic, it can be used to patch a receiver model and improve robustness to PTQ-induced noise by as much as 60%, without receiver-side quantization-aware training (QAT). Because the method requires no receiver training data, it provides a zero-shot, low-cost alternative to QAT for extremely low-bit deployment. We demonstrate this on Vision Transformer (ViT) models. More broadly, our results suggest that quantization robustness is not merely a byproduct of task-specific training, but a reusable feature of weight-space geometry that can be transferred rather than retrained.
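The weight-space arithmetic can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`extract_quantization_vector`, `patch_receiver`, `alpha`) are assumptions, and the recipe, by analogy with task-vector arithmetic, takes the quantization vector to be the difference between the donor's quantization-robust weights and the shared pretrained weights.

```python
import numpy as np

def extract_quantization_vector(donor_robust, pretrained):
    """Hypothetical extraction: per-tensor difference between the donor's
    quantization-robust weights and the shared pretrained weights."""
    return {k: donor_robust[k] - pretrained[k] for k in pretrained}

def patch_receiver(receiver, qvec, alpha=1.0):
    """Patch the receiver by adding the scaled quantization vector;
    alpha is an assumed scaling knob, not from the paper."""
    return {k: receiver[k] + alpha * qvec[k] for k in receiver}

# Toy example with random stand-ins for model weights.
rng = np.random.default_rng(0)
pretrained = {"w": rng.normal(size=(4, 4))}
donor_robust = {"w": pretrained["w"] + 0.1 * rng.normal(size=(4, 4))}
receiver = {"w": pretrained["w"] + 0.05 * rng.normal(size=(4, 4))}

qvec = extract_quantization_vector(donor_robust, pretrained)
patched = patch_receiver(receiver, qvec, alpha=0.5)
```

In practice the same element-wise arithmetic would run over a real checkpoint's state dict, one tensor per parameter, before the receiver is quantized.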