Visual Backdoor Attacks on MLLM Embodied Decision Making via Contrastive Trigger Learning

📅 2025-10-31
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work identifies and addresses a novel security threat: vision-based backdoor attacks against multimodal large language model (MLLM)-driven embodied agents. Specifically, adversaries exploit common physical objects in real-world environments as visual triggers to induce agents to persistently execute malicious multi-step policies under arbitrary viewing angles and lighting conditions. To enhance attack robustness and generalizability, the authors propose the first physical-object-triggered vision backdoor attack, introducing Contrastive Trigger Learning (CTL), a preference-learning framework for trigger recognition, and a two-stage training paradigm combining supervised fine-tuning with CTL. Evaluated on a newly constructed multi-scene, multi-task, multi-location trigger dataset, the method achieves up to 80% attack success rate across multiple embodied benchmarks and mainstream MLLMs, without degrading benign task performance; trigger recognition accuracy improves by up to 39% over conventional fine-tuning.

๐Ÿ“ Abstract
Multimodal large language models (MLLMs) have advanced embodied agents by enabling direct perception, reasoning, and planning of task-oriented actions from visual inputs. However, such vision-driven embodied agents open a new attack surface: visual backdoor attacks, where the agent behaves normally until a visual trigger appears in the scene, then persistently executes an attacker-specified multi-step policy. We introduce BEAT, the first framework to inject such visual backdoors into MLLM-based embodied agents using objects in the environment as triggers. Unlike textual triggers, object triggers exhibit wide variation across viewpoints and lighting, making them difficult to implant reliably. BEAT addresses this challenge by (1) constructing a training set that spans diverse scenes, tasks, and trigger placements to expose agents to trigger variability, and (2) introducing a two-stage training scheme that first applies supervised fine-tuning (SFT) and then our novel Contrastive Trigger Learning (CTL). CTL formulates trigger discrimination as preference learning between trigger-present and trigger-free inputs, explicitly sharpening the decision boundaries to ensure precise backdoor activation. Across various embodied agent benchmarks and MLLMs, BEAT achieves attack success rates of up to 80% while maintaining strong benign task performance, and generalizes reliably to out-of-distribution trigger placements. Notably, compared to naive SFT, CTL boosts backdoor activation accuracy by up to 39% under limited backdoor data. These findings expose a critical yet unexplored security risk in MLLM-based embodied agents, underscoring the need for robust defenses before real-world deployment.
Problem

Research questions and friction points this paper is trying to address.

Injecting visual backdoors into MLLM-based embodied agents using environmental objects as triggers
Addressing trigger variability across viewpoints and lighting for reliable backdoor activation
Ensuring high attack success while maintaining benign task performance in agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object triggers in environments enable visual backdoor attacks
Two-stage training with supervised fine-tuning and contrastive learning
Contrastive Trigger Learning sharpens decision boundaries for activation
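The listing does not include the CTL objective itself. As a hedged illustration only: "preference learning between trigger-present and trigger-free inputs" resembles a DPO-style pairwise loss, where the policy is rewarded for preferring the attacker's action sequence on trigger-present observations relative to a frozen reference model. The function name, argument layout, and β value below are hypothetical, not taken from the paper.

```python
import math

def ctl_preference_loss(lp_trig: float, lp_ref_trig: float,
                        lp_clean: float, lp_ref_clean: float,
                        beta: float = 0.1) -> float:
    """Illustrative DPO-style preference loss for trigger discrimination.

    lp_trig / lp_clean: policy log-probability of the malicious multi-step
    action sequence given the trigger-present / trigger-free observation.
    lp_ref_trig / lp_ref_clean: the same quantities under a frozen
    reference model.

    The loss is small when the policy assigns relatively higher probability
    to the malicious sequence only when the visual trigger is present,
    which sharpens the activation boundary between the two input types.
    """
    margin = beta * ((lp_trig - lp_ref_trig) - (lp_clean - lp_ref_clean))
    # -log(sigmoid(margin)): decreases as the trigger-present input wins
    # the preference comparison by a larger margin.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At a neutral starting point (policy equal to the reference on both inputs) the loss is log 2 ≈ 0.693, and it falls as training teaches the model to emit the malicious plan only under the trigger; this is one plausible reading of the two-stage SFT-then-CTL scheme, not the authors' released implementation.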