🤖 AI Summary
Multimodal foundation models are vulnerable to adversarial hallucination attacks—imperceptible perturbations can disrupt cross-modal alignment and mislead downstream tasks. To address this, we propose a task-agnostic and model-agnostic generative defense framework: it reconstructs perturbed inputs with generative models (e.g., variational autoencoders) and enforces cross-modal embedding consistency via a consensus aggregation mechanism, thereby restoring corrupted alignment. The method imposes no architectural or task-specific assumptions, ensuring broad generalizability. Extensive experiments demonstrate that our approach reduces attack success rates to near zero across diverse state-of-the-art multimodal encoders. Moreover, cross-modal alignment accuracy improves by 4 percentage points under clean conditions and by 11 points under adversarial perturbations, substantially enhancing model robustness.
📝 Abstract
Multi-modal foundation models align images, text, and other modalities in a shared embedding space but remain vulnerable to adversarial illusions (Zhang et al., 2025), where imperceptible perturbations disrupt cross-modal alignment and mislead downstream tasks. To counteract adversarial illusions, we propose a task-agnostic mitigation mechanism that reconstructs a clean version of the attacker's perturbed input through generative models, e.g., Variational Autoencoders (VAEs), to restore natural alignment. To further strengthen this defense, we adopt a generative sampling strategy combined with a consensus-based aggregation scheme over the outcomes of the generated samples. Our experiments on state-of-the-art multi-modal encoders show that our approach reduces illusion attack success rates to near zero and improves cross-modal alignment by 4 percentage points (42% to 46%) and 11 points (32% to 43%) in the unperturbed and perturbed input settings, respectively, providing an effective and model-agnostic defense against adversarial illusions.
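The defense described above has a simple high-level shape: stochastically reconstruct the (possibly perturbed) input several times with a generative model, embed each reconstruction with the frozen multimodal encoder, and aggregate the per-sample matches by majority vote. The toy sketch below illustrates only that control flow; the `proj` encoder and `vae_reconstruct` are hypothetical stand-ins (a random projection and additive latent noise), not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
proj = rng.normal(size=(8, 16))  # stand-in for a frozen multimodal encoder

def embed(x):
    """Project an input into the shared embedding space (unit-normalized)."""
    z = proj @ x
    return z / (np.linalg.norm(z) + 1e-9)

def vae_reconstruct(x, rng):
    """Stand-in for VAE encode -> latent sampling -> decode.

    In the real defense, stochastic reconstruction pulls the input back
    toward the natural data manifold; here we only mimic the sampling
    noise with an additive Gaussian.
    """
    return x + rng.normal(scale=0.3, size=x.shape)

def consensus_match(x_adv, text_embs, k=16, rng=rng):
    """Draw k reconstructions, match each to a caption, majority-vote."""
    votes = np.zeros(len(text_embs), dtype=int)
    for _ in range(k):
        e = embed(vae_reconstruct(x_adv, rng))
        votes[int(np.argmax(text_embs @ e))] += 1
    return int(np.argmax(votes))

# Toy demo: a clean input aligned with caption 0, plus a small perturbation.
x_clean = rng.normal(size=16)
captions = np.stack([embed(x_clean), embed(rng.normal(size=16))])
x_adv = x_clean + 0.05 * rng.normal(size=16)
pred = consensus_match(x_adv, captions)  # consensus recovers caption 0
```

The consensus step is what makes the sampling useful: any single reconstruction can land on a wrong match, but aggregating over many independent samples suppresses that variance, which is consistent with the paper's reported near-zero attack success rates.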