🤖 AI Summary
This work addresses the vulnerability of Vision-Language-Action (VLA) models to backdoor attacks during pretraining, which can induce harmful behaviors in downstream embodied tasks. The authors reveal, for the first time, that backdoor triggers tend to concentrate attention within deep layers, and propose Bera, a novel framework built on this observation. At test time, Bera identifies anomalous visual tokens by analyzing attention weights, masks suspicious regions, and reconstructs trigger-free images to sever the link between triggers and malicious actions. Notably, this approach requires neither retraining nor modifications to the original training pipeline. Extensive experiments across multiple embodied AI platforms and tasks demonstrate that Bera substantially reduces attack success rates while preserving normal task performance, effectively restoring safe policy behavior in compromised models.
📝 Abstract
Downstream fine-tuning of vision-language-action (VLA) models enhances robotics, yet exposes the pipeline to backdoor risks. Attackers can pretrain VLAs on poisoned data to implant backdoors that remain stealthy but can trigger harmful behavior during inference. However, existing defenses either lack mechanistic insight into multimodal backdoors or impose prohibitive computational costs via full-model retraining. To address this gap, we uncover a deep-layer attention-grabbing mechanism: backdoors redirect late-stage attention and form compact embedding clusters near the clean manifold. Leveraging this insight, we introduce Bera, a test-time backdoor erasure framework that detects tokens with anomalous attention via latent-space localization, masks suspicious regions using deep-layer cues, and reconstructs a trigger-free image to break the mapping between triggers and unsafe actions while restoring correct behavior. Unlike prior defenses, Bera requires neither retraining of VLAs nor any changes to the training pipeline. Extensive experiments across multiple embodied platforms and tasks show that Bera effectively maintains nominal performance, significantly reduces attack success rates, and consistently restores benign behavior from backdoored outputs, thereby offering a robust and practical defense mechanism for securing robotic systems.
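To make the detect-mask-reconstruct pipeline concrete, here is a minimal sketch in NumPy. It is an illustration only, not the paper's implementation: the z-score outlier test, the row-major 16-pixel patch layout, and the mean-fill "reconstruction" are all simplifying assumptions standing in for Bera's latent-space localization and image reconstruction steps.

```python
import numpy as np

def detect_anomalous_tokens(attn, z_thresh=3.0):
    """Flag visual tokens whose deep-layer attention mass is an outlier.

    attn: 1-D array of attention weight each visual token receives,
    aggregated over heads in the late layers (hypothetical input
    format; the paper's exact aggregation is not specified here).
    """
    mu, sigma = attn.mean(), attn.std()
    if sigma == 0:
        return np.zeros_like(attn, dtype=bool)
    z = (attn - mu) / sigma
    return z > z_thresh  # tokens "grabbing" anomalously high attention

def mask_and_reconstruct(image, flags, patch=16):
    """Mask patches of flagged tokens, then fill them with the mean of
    the unmasked pixels (a crude stand-in for trigger-free image
    reconstruction).

    image: (H, W, C) float array; tokens are assumed to tile the image
    in row-major order as patch x patch squares.
    """
    out = image.copy()
    h, w, _ = image.shape
    cols = w // patch
    masked = np.zeros((h, w), dtype=bool)
    for idx in np.flatnonzero(flags):
        r, c = divmod(idx, cols)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = True
    if masked.any() and (~masked).any():
        fill = out[~masked].mean(axis=0)  # per-channel mean of clean pixels
        out[masked] = fill
    return out
```

A real system would replace the mean fill with a learned inpainting model and derive `attn` from the VLA's own deep-layer attention maps; the sketch only shows how anomaly flags translate into a sanitized input image.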