🤖 AI Summary
Quantized neural networks suffer significant performance degradation under domain shifts in dynamic environments. To address this, we propose Zeroth-Order Adaptation (ZOA), a test-time adaptation framework that eliminates backpropagation and updates the model using only two forward passes per step. ZOA incorporates a lightweight domain-knowledge management mechanism that enables cross-domain knowledge accumulation and interference suppression with minimal storage overhead. The framework is architecture-agnostic, compatible with both CNNs and Transformers, and supports low-bit quantized models (e.g., W6A6). On ImageNet-C, ZOA improves the average accuracy of a quantized ViT-B by 5.0% over the state-of-the-art FOA, markedly enhancing robustness and generalization. By enabling efficient, gradient-free online adaptation, ZOA establishes a practical new paradigm for resource-constrained deployment scenarios.
📝 Abstract
Quantizing deep models prior to deployment is a widely adopted technique to speed up inference for various real-time applications, such as autonomous driving. However, quantized models often suffer severe performance degradation in dynamic environments with potential domain shifts, and this degradation is significantly more pronounced than in their full-precision counterparts, as our theoretical and empirical analyses show. To address the domain shift problem, test-time adaptation (TTA) has emerged as an effective solution, enabling models to learn adaptively from test data. Unfortunately, existing TTA methods are often impractical for quantized models because they typically rely on gradient backpropagation, an operation unsupported on quantized models due to vanishing gradients as well as memory and latency constraints. In this paper, we focus on TTA for quantized models to improve their robustness and generalization ability efficiently. We propose a continual zeroth-order adaptation (ZOA) framework that adapts the model using only two forward passes per update, eliminating the computational burden of existing methods. Moreover, we propose a domain knowledge management scheme that stores and reuses knowledge from different domains with negligible memory consumption, reducing interference between domains and fostering knowledge accumulation during long-term adaptation. Experimental results on three classical architectures, including quantized transformer-based and CNN-based models, demonstrate the superiority of our method for quantized model adaptation. On the quantized W6A6 ViT-B model, ZOA achieves a 5.0% accuracy improvement over the state-of-the-art FOA on the ImageNet-C dataset. The source code is available at https://github.com/DengZeshuai/ZOA.
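The abstract does not spell out how a "two forward passes" update works; the classical gradient-free technique behind this idea is simultaneous-perturbation (SPSA-style) zeroth-order optimization, which estimates a descent direction from two perturbed loss evaluations instead of backpropagation. Below is a minimal, illustrative sketch of that general principle on a toy objective; the function names, step sizes, and the quadratic loss are assumptions for demonstration and are not ZOA's actual procedure.

```python
import numpy as np

def zo_step(params, loss_fn, lr=0.05, eps=1e-3, rng=None):
    """One zeroth-order update: estimate a gradient from exactly two
    forward passes with a random sign perturbation, then descend.
    Illustrative SPSA-style sketch, not the paper's exact algorithm."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=params.shape)  # Rademacher direction
    loss_plus = loss_fn(params + eps * delta)           # forward pass 1
    loss_minus = loss_fn(params - eps * delta)          # forward pass 2
    grad_est = (loss_plus - loss_minus) / (2.0 * eps) * delta
    return params - lr * grad_est

# Toy usage: minimize a quadratic without ever computing its gradient.
rng = np.random.default_rng(0)
theta = np.array([3.0, -2.0])
loss = lambda p: float(np.sum(p ** 2))
for _ in range(200):
    theta = zo_step(theta, loss, rng=rng)
```

Because each update touches the model only through forward evaluations, this style of optimizer sidesteps the vanishing-gradient and memory issues that make backpropagation impractical on low-bit quantized models.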