🤖 AI Summary
Multimodal models are highly vulnerable to adversarial attacks due to strong inter-modal dependencies, yet existing defense methods overlook how unevenly individual modalities contribute to overall robustness, leading to suboptimal defenses. To address this, we propose VARMAT, a vulnerability-aware adversarial training framework. First, we introduce a modality-specific vulnerability quantification mechanism that identifies per-modality robustness bottlenecks via a first-order approximation of adversarial perturbations. Second, we design modality-adaptive regularization terms that penalize the most vulnerable modalities, enhancing robustness without compromising task accuracy. This exposes a critical blind spot in conventional multimodal adversarial training: its neglect of modality-wise robustness disparities. Extensive experiments on three standard multimodal benchmarks show consistent and significant robustness improvements of 12.73%, 22.21%, and 11.19%, substantially outperforming state-of-the-art methods.
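For intuition, the per-modality probe can be read as a first-order Taylor expansion of the task loss $\mathcal{L}$ around each modality's clean input $x_m$; under an $\ell_\infty$-bounded perturbation, the worst-case loss increase then has a closed form. The notation here is illustrative, not taken from the paper:

$$
\mathcal{L}(x_m + \delta_m) \approx \mathcal{L}(x_m) + \langle \nabla_{x_m} \mathcal{L},\, \delta_m \rangle,
\qquad
\max_{\|\delta_m\|_\infty \le \epsilon} \langle \nabla_{x_m} \mathcal{L},\, \delta_m \rangle = \epsilon \,\|\nabla_{x_m} \mathcal{L}\|_1,
$$

so the $\ell_1$ norm of the loss gradient with respect to modality $m$ gives a cheap vulnerability score for that modality.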
📝 Abstract
Multimodal learning has shown clear advantages on various tasks by integrating multiple modalities. However, the interdependencies among modalities also make multimodal models more susceptible to adversarial attacks. Existing methods mainly attack specific modalities or attack all modalities indiscriminately. In this paper, we find that these approaches ignore the differences in how much each modality contributes to final robustness, resulting in suboptimal robustness. To bridge this gap, we introduce Vulnerability-Aware Robust Multimodal Adversarial Training (VARMAT), a probe-in-training adversarial training method that improves multimodal robustness by identifying the vulnerability of each modality. Specifically, VARMAT first explicitly quantifies the vulnerability of each modality, grounded in a first-order approximation of the attack objective (Probe). It then applies a targeted regularization term that penalizes modalities with high vulnerability, guiding robust learning while maintaining task accuracy (Training). We demonstrate the enhanced robustness of our method across multiple multimodal datasets involving diverse modalities, achieving robustness improvements of 12.73%, 22.21%, and 11.19% on three multimodal datasets and revealing a significant blind spot in multimodal adversarial training.
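Below is a minimal PyTorch-style sketch of the probe-then-penalize loop described above. It is a guess at the mechanics under stated assumptions, not the authors' implementation: the model interface (a dict of modality tensors), the softmax weighting used to make the penalty "targeted", and the names `eps` and `lam` are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def varmat_style_step(model, inputs, labels, optimizer, eps=8 / 255, lam=0.1):
    """One training step: probe per-modality vulnerability, then penalize it.

    inputs: dict mapping modality name -> batch tensor (hypothetical interface).
    """
    optimizer.zero_grad()
    # Make each modality's input a leaf tensor so we can probe its gradient.
    inputs = {m: x.clone().detach().requires_grad_(True) for m, x in inputs.items()}

    logits = model(inputs)  # assumption: model accepts a modality dict
    task_loss = F.cross_entropy(logits, labels)

    # Probe: first-order vulnerability. For an L_inf-bounded perturbation delta,
    # loss(x + delta) ~ loss(x) + <grad, delta>, maximized at eps * ||grad||_1.
    names = list(inputs.keys())
    grads = torch.autograd.grad(
        task_loss, [inputs[m] for m in names],
        create_graph=True,  # keep the graph so the penalty is differentiable
    )
    vulnerability = {
        m: eps * g.flatten(1).abs().sum(dim=1).mean()
        for m, g in zip(names, grads)
    }

    # Training: weight the penalty toward the most vulnerable modalities
    # (softmax over detached scores; one plausible reading of "targeted").
    scores = torch.stack([v.detach() for v in vulnerability.values()])
    weights = torch.softmax(scores, dim=0)
    reg = sum(w * v for w, v in zip(weights, vulnerability.values()))

    (task_loss + lam * reg).backward()
    optimizer.step()
    return task_loss.item(), {m: v.item() for m, v in vulnerability.items()}
```

Note that penalizing gradient norms requires a second backward pass through the model (hence `create_graph=True`), roughly doubling the cost of a standard training step.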