Adversarial Robustness for Unified Multi-Modal Encoders via Efficient Calibration

📅 2025-05-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Unified multi-modal encoders exhibit poor robustness against adversarial perturbations—particularly for non-visual modalities such as audio and point clouds—and lack systematic robustness analysis. Method: We first identify cross-modal robustness deficiencies and then propose an efficient adversarial calibration framework that requires no modification to pretrained backbones or semantic centers. It adopts a frozen-backbone architecture with lightweight, modality-specific projection heads, jointly optimized via three objectives—fixed-center cross-entropy loss, clean–adversarial L2 alignment, and clean–adversarial InfoNCE loss—augmented by modality-consistency regularization. Contribution/Results: Evaluated across six modalities and three Bind-style architectures, our method achieves up to +47.3% in robust accuracy (ε = 4/255) without degrading zero-shot classification or cross-modal retrieval performance—in some cases even improving them. The trainable parameters constitute less than 1% of the full model.

📝 Abstract
Recent unified multi-modal encoders align a wide range of modalities into a shared representation space, enabling diverse cross-modal tasks. Despite their impressive capabilities, the robustness of these models under adversarial perturbations remains underexplored, which is a critical concern for safety-sensitive applications. In this work, we present the first comprehensive study of adversarial vulnerability in unified multi-modal encoders. We find that even mild adversarial perturbations lead to substantial performance drops across all modalities. Non-visual inputs, such as audio and point clouds, are especially fragile, while visual inputs like images and videos also degrade significantly. To address this, we propose an efficient adversarial calibration framework that improves robustness across modalities without modifying pretrained encoders or semantic centers, ensuring compatibility with existing foundation models. Our method introduces modality-specific projection heads trained solely on adversarial examples, while keeping the backbone and embeddings frozen. We explore three training objectives: fixed-center cross-entropy, clean–adversarial L2 alignment, and clean–adversarial InfoNCE, and we introduce a regularization strategy to ensure modality-consistent alignment under attack. Experiments on six modalities and three Bind-style models show that our method improves adversarial robustness by up to 47.3% at ε = 4/255, while preserving or even improving clean zero-shot and retrieval performance with less than 1% trainable parameters.
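To make the three training objectives concrete, here is a minimal PyTorch sketch of how they could be combined. This is an illustration based only on the abstract, not the authors' released code: the function names, temperature values, and loss weights are hypothetical, and the frozen backbone and projection heads are assumed to be handled outside these functions.

```python
# Hypothetical sketch of the three calibration losses described in the
# abstract. All names and hyperparameters here are assumptions for
# illustration; the backbone is assumed frozen, with only the
# modality-specific projection head producing adv_emb being trained.
import torch
import torch.nn.functional as F


def fixed_center_ce(adv_emb, centers, labels, tau=0.07):
    # Cross-entropy of adversarial embeddings against frozen semantic
    # centers (cosine-similarity logits; centers receive no gradient).
    logits = F.normalize(adv_emb, dim=-1) @ F.normalize(centers.detach(), dim=-1).T
    return F.cross_entropy(logits / tau, labels)


def clean_adv_l2(adv_emb, clean_emb):
    # L2 alignment pulling each adversarial embedding toward the
    # frozen embedding of its clean counterpart.
    return F.mse_loss(adv_emb, clean_emb.detach())


def clean_adv_infonce(adv_emb, clean_emb, tau=0.07):
    # InfoNCE: each adversarial embedding should match its own clean
    # embedding (positive) against the other samples in the batch.
    a = F.normalize(adv_emb, dim=-1)
    c = F.normalize(clean_emb.detach(), dim=-1)
    logits = a @ c.T / tau
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)


def calibration_loss(adv_emb, clean_emb, centers, labels, weights=(1.0, 1.0, 1.0)):
    # Weighted sum of the three objectives; the weights are assumed,
    # as the abstract does not specify how they are balanced.
    w_ce, w_l2, w_nce = weights
    return (w_ce * fixed_center_ce(adv_emb, centers, labels)
            + w_l2 * clean_adv_l2(adv_emb, clean_emb)
            + w_nce * clean_adv_infonce(adv_emb, clean_emb))
```

In this reading, gradients flow only through `adv_emb` (the projection head's output on adversarial inputs), which is consistent with the paper's claim that the backbone, clean embeddings, and semantic centers all stay frozen.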
Problem

Research questions and friction points this paper is trying to address.

Study adversarial vulnerability in multi-modal encoders
Address performance drop from mild adversarial perturbations
Propose efficient calibration to improve robustness across modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient adversarial calibration framework for robustness
Modality-specific projection heads on adversarial examples
Regularization for modality-consistent alignment under attack