🤖 AI Summary
Unified multimodal encoders exhibit poor robustness against adversarial perturbations, particularly for non-visual modalities such as audio and point clouds, and they have so far lacked a systematic robustness analysis.
Method: We first identify cross-modal robustness deficiencies and propose an efficient adversarial calibration framework that requires no modification to pretrained backbones or semantic centers. It adopts a frozen-backbone architecture with lightweight, modality-specific projection heads, jointly optimized via three objectives: a fixed-center cross-entropy loss, a clean-to-adversarial L2 alignment loss, and a clean-to-adversarial InfoNCE loss, augmented by modality-consistency regularization.
Contribution/Results: Evaluated across six modalities and three Bind architectures, our method improves robust accuracy by up to +47.3% (ε = 4/255) without degrading zero-shot classification or cross-modal retrieval performance; in some cases it even improves them. The trainable parameters constitute less than 1% of the full model.
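The three calibration objectives named above can be sketched numerically. The following is a minimal NumPy sketch, not the paper's implementation: the function names, the temperature `tau`, and the loss weights `w` are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def fixed_center_ce(z_adv, centers, labels, tau=0.07):
    # Cross-entropy of adversarial embeddings against *frozen* semantic
    # centers: centers are never updated, only the projection head is.
    logits = z_adv @ centers.T / tau
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def l2_align(z_clean, z_adv):
    # Pull each adversarial embedding toward its clean counterpart.
    return np.mean(np.sum((z_clean - z_adv) ** 2, axis=1))

def info_nce(z_clean, z_adv, tau=0.07):
    # Clean embedding i is the positive for adversarial embedding i;
    # the other clean embeddings in the batch serve as negatives.
    logits = z_adv @ z_clean.T / tau
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()

def calibration_loss(z_clean, z_adv, centers, labels, w=(1.0, 1.0, 1.0)):
    # Joint objective over the three terms (weights w are illustrative).
    z_clean, z_adv = l2_normalize(z_clean), l2_normalize(z_adv)
    centers = l2_normalize(centers)
    return (w[0] * fixed_center_ce(z_adv, centers, labels)
            + w[1] * l2_align(z_clean, z_adv)
            + w[2] * info_nce(z_clean, z_adv))
```

Each term is non-negative, so the joint loss is minimized when adversarial embeddings land on their clean counterparts and on the correct frozen centers.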
📄 Abstract
Recent unified multi-modal encoders align a wide range of modalities into a shared representation space, enabling diverse cross-modal tasks. Despite their impressive capabilities, the robustness of these models under adversarial perturbations remains underexplored, which is a critical concern for safety-sensitive applications. In this work, we present the first comprehensive study of adversarial vulnerability in unified multi-modal encoders. We find that even mild adversarial perturbations lead to substantial performance drops across all modalities. Non-visual inputs, such as audio and point clouds, are especially fragile, while visual inputs like images and videos also degrade significantly. To address this, we propose an efficient adversarial calibration framework that improves robustness across modalities without modifying pretrained encoders or semantic centers, ensuring compatibility with existing foundation models. Our method introduces modality-specific projection heads trained solely on adversarial examples, while keeping the backbone and embeddings frozen. We explore three training objectives: fixed-center cross-entropy, clean-to-adversarial L2 alignment, and clean-to-adversarial InfoNCE, and we introduce a regularization strategy to ensure modality-consistent alignment under attack. Experiments on six modalities and three Bind-style models show that our method improves adversarial robustness by up to 47.3% at ε = 4/255, while preserving or even improving clean zero-shot and retrieval performance with less than 1% of the parameters trainable.
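The frozen-backbone wiring described in the abstract can be illustrated with a toy model. This is a sketch under stated assumptions, not the paper's architecture: the backbone is stood in by a single frozen matrix, the per-modality head is a small residual MLP initialized near zero (so clean embeddings are approximately preserved), and the dimensions are chosen only to make the "less than 1% trainable" property visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative): a real Bind-style backbone has far more
# parameters, which makes the trainable fraction even smaller in practice.
D_IN, D_EMB, D_HIDDEN = 1024, 1024, 4

# Frozen pretrained "backbone" stand-in: never updated during calibration.
backbone = rng.normal(size=(D_IN, D_EMB)) / np.sqrt(D_IN)

# Lightweight modality-specific calibration head: the ONLY trainable part.
# Near-zero init keeps the residual correction tiny at the start, so the
# frozen embedding space (and its semantic centers) is left intact.
head_w1 = rng.normal(size=(D_EMB, D_HIDDEN)) * 0.01
head_w2 = rng.normal(size=(D_HIDDEN, D_EMB)) * 0.01

def embed(x, calibrated=True):
    z = x @ backbone                      # frozen features
    if calibrated:
        # Residual ReLU head applied on top of the frozen embedding.
        z = z + np.maximum(z @ head_w1, 0.0) @ head_w2
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

trainable = head_w1.size + head_w2.size
frozen = backbone.size
trainable_fraction = trainable / (trainable + frozen)
```

Because only `head_w1` and `head_w2` receive gradients, the calibration step leaves the pretrained encoder and its shared embedding space untouched, which is what keeps the method compatible with existing foundation models.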