When Attention Betrays: Erasing Backdoor Attacks in Robotic Policies by Reconstructing Visual Tokens

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of Vision-Language-Action (VLA) models to backdoor attacks during pretraining, which can induce harmful behaviors in downstream embodied tasks. The authors propose Bera, a novel framework that, for the first time, reveals that backdoor triggers tend to concentrate attention within deep layers of the model. At test time, Bera identifies anomalous visual tokens by analyzing attention weights, masks suspicious regions, and reconstructs trigger-free images to sever the link between triggers and malicious actions. Notably, this approach requires neither retraining nor modifications to the original training pipeline. Extensive experiments across multiple embodied AI platforms and tasks demonstrate that Bera substantially reduces attack success rates while preserving normal task performance, effectively restoring safe policy behavior in compromised models.

📝 Abstract
Downstream fine-tuning of vision-language-action (VLA) models enhances robotics, yet exposes the pipeline to backdoor risks. Attackers can pretrain VLAs on poisoned data to implant backdoors that remain stealthy but can trigger harmful behavior during inference. However, existing defenses either lack mechanistic insight into multimodal backdoors or impose prohibitive computational costs via full-model retraining. To address this, we uncover a deep-layer attention-grabbing mechanism: backdoors redirect late-stage attention and form compact embedding clusters near the clean manifold. Leveraging this insight, we introduce Bera, a test-time backdoor erasure framework that detects tokens with anomalous attention via latent-space localization, masks suspicious regions using deep-layer cues, and reconstructs a trigger-free image to break the trigger-to-unsafe-action mapping while restoring correct behavior. Unlike prior defenses, Bera requires neither retraining of VLAs nor any changes to the training pipeline. Extensive experiments across multiple embodied platforms and tasks show that Bera effectively maintains nominal performance, significantly reduces attack success rates, and consistently restores benign behavior from backdoored outputs, thereby offering a robust and practical defense mechanism for securing robotic systems.
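The detect-mask-reconstruct pipeline the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`flag_anomalous_tokens`, `mask_patches`), the z-score threshold for "anomalous attention", and zero-filling as a stand-in for the paper's image reconstruction are all assumptions made here for illustration.

```python
import numpy as np

def flag_anomalous_tokens(attn, z_thresh=3.0):
    """Flag visual tokens whose deep-layer attention mass is a statistical outlier.

    attn: shape (num_tokens,), attention mass each visual token receives
    in a deep layer. Returns a boolean mask of suspicious tokens.
    (z-score thresholding is an illustrative choice, not the paper's detector.)
    """
    z = (attn - attn.mean()) / (attn.std() + 1e-8)
    return z > z_thresh

def mask_patches(image, flags, patch=16):
    """Zero out the image patches corresponding to flagged visual tokens.

    image: (H, W, C) array; flags: boolean per patch, row-major order.
    Zero-filling stands in for the trigger-free reconstruction step.
    """
    out = image.copy()
    cols = image.shape[1] // patch
    for idx in np.flatnonzero(flags):
        r, c = divmod(int(idx), cols)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return out
```

Under this sketch, a trigger patch that grabs a disproportionate share of late-stage attention shows up as a large z-score, gets masked, and the cleaned image is passed back to the policy in place of the poisoned one.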
Problem

Research questions and friction points this paper is trying to address.

backdoor attacks
vision-language-action models
robotic policies
multimodal security
adversarial robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

backdoor erasure
visual token reconstruction
attention mechanism
vision-language-action models
test-time defense
Xuetao Li
School of Computer Science, Wuhan University
Pinhan Fu
School of Computer Science, Wuhan University
Wenke Huang
School of Computer Science, Wuhan University
Federated Learning, MLLM
Nengyuan Pan
Faculty of Artificial Intelligence, Hubei University
Songhua Yang
School of Computer Science, Wuhan University
Kaiyan Zhao
The University of Tokyo
Natural Language Processing
Guancheng Wan
Computer Science, UCLA
AI Agent, AI4Science, Large Language Model, Trustworthy AI
Mengde Li
Institute of Technological Sciences, Wuhan University
Jifeng Xuan
Wuhan University
Software Engineering, Testing, Debugging, Mining Software Repositories, SBSE
Miao Li
Professor, Wuhan University
Robotics, Grasping, Dexterous Manipulation, Learning from Demonstration