Proxy Robustness in Vision Language Models is Effortlessly Transferable

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the impracticality of existing adversarial robustness distillation methods for vision-language models (VLMs), which rely on costly adversarially trained teacher models. The authors discover that standard CLIP models—despite lacking adversarial training—exhibit cross-architecture robustness against adversarial examples generated from other model architectures. Leveraging this insight, they propose the Heterogeneous Proxy Transfer (HPT) framework, which formalizes and exploits "proxy adversarial robustness" to enable efficient robustness distillation without adversarial training. Furthermore, they introduce a Generalization-Pivot Decoupling (GPD) strategy to balance natural generalization and robustness. Experiments across 15 zero-shot datasets demonstrate that HPT-GPD substantially enhances adversarial robustness while maintaining strong natural generalization performance.
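The summary describes GPD as decoupling training into a generalization-anchored warm-up and a robustness-oriented HPT phase via learning-rate scheduling. The paper excerpt does not give the actual schedule; the following is only a hypothetical illustration of such a two-phase schedule (the phase lengths, learning rates, and cosine decay are all assumptions, not the authors' settings):

```python
import math

def two_phase_lr(step, total_steps, warmup_steps=100,
                 warmup_lr=1e-6, peak_lr=1e-4):
    """Illustrative two-phase schedule: a small constant LR during a
    warm-up meant to stay near the pretrained (generalization-anchored)
    weights, then a larger LR with cosine decay for the transfer phase."""
    if step < warmup_steps:
        # Phase 1: generalization-anchored warm-up.
        return warmup_lr
    # Phase 2: robustness-oriented transfer, cosine-decayed from peak_lr.
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * t))
```

The small warm-up LR limits drift from the pretrained weights (preserving zero-shot generalization) before the higher-LR phase pursues robustness.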

📝 Abstract
As a pivotal technique for improving the defense of deep models, adversarial robustness transfer via distillation has demonstrated remarkable success in conventional image classification tasks. However, this paradigm encounters critical challenges when applied to vision-language models (VLMs) (e.g., CLIP): constructing an adversarially robust teacher for large-scale multi-modal models demands prohibitively high computational resources. We bridge this gap by revealing an interesting phenomenon: vanilla CLIP (without adversarial training) exhibits intrinsic defensive capabilities against adversarial examples generated by another CLIP with a different architecture. We formally define this as proxy adversarial robustness, and naturally propose a Heterogeneous Proxy Transfer (HPT) framework that establishes cross-architectural robustness distillation channels between CLIP variants, effortlessly enabling VLM robustness transfer from proxy to target models. Yet, such a proxy transfer paradigm easily induces severe overfitting, leading to a sharp degradation in zero-shot natural generalization. To resolve this, we design Generalization-Pivot Decoupling (GPD), which leverages differences in learning-rate scheduling. GPD decouples the proxy transfer process into a generalization-anchored warm-up that maintains generalization and a generalization-pulled HPT stage that promotes adversarial robustness, achieving an equilibrium between natural generalization and adversarial robustness. Extensive experiments on 15 zero-shot datasets demonstrate the effectiveness of our HPT-GPD method. The code is available at github.com/fxw13/HPT-GPD.
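The core observation, proxy adversarial robustness, is that adversarial examples crafted against one model transfer only partially to a model with a different architecture, so the second model retains accuracy the first loses. The toy sketch below illustrates this with two linear classifiers standing in for two CLIP variants (the models, data, FGSM attack, and all hyperparameters are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def make_data(n=2000, d=20, c=3, seed=0):
    """Synthetic data labeled by a hidden ground-truth linear map."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    W_true = rng.normal(size=(d, c))
    return X, (X @ W_true).argmax(axis=1), W_true

def accuracy(W, X, y):
    return float(((X @ W).argmax(axis=1) == y).mean())

def fgsm(W, X, y, eps):
    """One-step FGSM against a linear softmax classifier W."""
    logits = X @ W
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z); p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0   # dLoss/dlogits for cross-entropy
    grad = p @ W.T                   # dLoss/dX via the chain rule
    return X + eps * np.sign(grad)   # L-inf ascent step of size eps

X, y, W_true = make_data()
rng = np.random.default_rng(1)
# Two imperfect copies of the ground truth stand in for two different
# CLIP architectures (purely illustrative).
W_proxy = W_true + 0.4 * rng.normal(size=W_true.shape)
W_target = W_true + 0.4 * rng.normal(size=W_true.shape)

X_adv = fgsm(W_proxy, X, y, eps=0.15)  # white-box attack on the proxy
print("proxy  clean:", accuracy(W_proxy, X, y))
print("proxy  adv  :", accuracy(W_proxy, X_adv, y))
print("target adv  :", accuracy(W_target, X_adv, y))
```

In this toy setting the target model keeps higher accuracy on the proxy's adversarial examples than the proxy itself does, which is the cross-architecture gap HPT exploits as a free robustness signal.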
Problem

Research questions and friction points this paper is trying to address.

adversarial robustness
vision-language models
robustness transfer
zero-shot generalization
proxy robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

proxy adversarial robustness
heterogeneous proxy transfer
vision-language models
robustness distillation
generalization-pivot decoupling
Xiaowei Fu
Chongqing Key Laboratory of Bio-perception and Multimodal Intelligent Information Processing, Chongqing University, Chongqing 401331, China; School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 401331, China
Fuxiang Huang
The Hong Kong University of Science and Technology (HKUST)
Multimodal Learning, Foundation Models for Vertical Domains, Domain Adaptation
Lei Zhang
Chongqing University
Computer Vision, Trustworthy AI, Domain Generalization, Transfer Learning, Intelligent Olfaction