Breaking the Illusion: Consensus-Based Generative Mitigation of Adversarial Illusions in Multi-Modal Embeddings

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal foundation models are vulnerable to adversarial illusion attacks: imperceptible perturbations can disrupt cross-modal alignment and mislead downstream tasks. To address this, the authors propose a task-agnostic and model-agnostic generative defense framework: it reconstructs perturbed inputs using generative models (e.g., variational autoencoders) and enforces cross-modal embedding consistency via a consensus aggregation mechanism over multiple generated samples, thereby restoring corrupted alignment. The method imposes no architectural or task-specific assumptions, ensuring broad applicability. Experiments show that the approach reduces attack success rates to near zero across diverse state-of-the-art multimodal encoders. Moreover, cross-modal alignment accuracy improves by 4 percentage points (42 to 46) on unperturbed inputs and by 11 points (32 to 43) under adversarial perturbations, substantially enhancing model robustness.

📝 Abstract
Multi-modal foundation models align images, text, and other modalities in a shared embedding space but remain vulnerable to adversarial illusions (Zhang et al., 2025), where imperceptible perturbations disrupt cross-modal alignment and mislead downstream tasks. To counteract the effects of adversarial illusions, we propose a task-agnostic mitigation mechanism that reconstructs the input from the attacker's perturbed input through generative models, e.g., Variational Autoencoders (VAEs), to maintain natural alignment. To further enhance our proposed defense mechanism, we adopt a generative sampling strategy combined with a consensus-based aggregation scheme over the outcomes of the generated samples. Our experiments on state-of-the-art multi-modal encoders show that our approach substantially reduces the illusion attack success rate to near zero and improves cross-modal alignment by 4 points (42 to 46) and 11 points (32 to 43) in the unperturbed and perturbed input settings, respectively, providing an effective and model-agnostic defense against adversarial illusions.
Problem

Research questions and friction points this paper is trying to address.

How can adversarial illusions in multi-modal embedding spaces be mitigated without task- or model-specific assumptions?
Can reconstructing perturbed inputs with generative models preserve natural cross-modal alignment?
Does consensus-based aggregation over multiple generated samples strengthen the defense?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative model reconstruction of perturbed inputs
Consensus-based aggregation of generated samples
Task-agnostic defense improving cross-modal alignment
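
The pipeline above (generative reconstruction, repeated sampling, consensus aggregation) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names, the cosine-similarity matching, and the majority-vote consensus rule are all assumptions, and `reconstruct` stands in for sampling from a trained VAE decoder.

```python
import numpy as np

def consensus_defense(x, reconstruct, embed, candidates, k=8, rng=None):
    """Sketch of consensus-based mitigation: draw k generative
    reconstructions of a (possibly perturbed) input, embed each one,
    and return the candidate that wins the majority vote.

    reconstruct(x, rng) -> array : stand-in for one VAE sample of x
    embed(v) -> array             : stand-in for the multi-modal encoder
    candidates                    : cross-modal targets (e.g. captions)
    """
    rng = rng or np.random.default_rng(0)
    cand_embs = [embed(c) for c in candidates]
    votes = np.zeros(len(candidates), dtype=int)
    for _ in range(k):
        z = embed(reconstruct(x, rng))  # embed one reconstruction
        sims = [
            z @ e / (np.linalg.norm(z) * np.linalg.norm(e))
            for e in cand_embs          # cosine similarity per candidate
        ]
        votes[int(np.argmax(sims))] += 1  # this sample's vote
    return int(np.argmax(votes))          # consensus decision
```

With a toy identity embedding and small Gaussian reconstruction noise, an input near one candidate is consistently matched to it across samples, so the vote converges even if a single reconstruction were off.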
🔎 Similar Papers
2023-08-22 · USENIX Security Symposium · Citations: 15