🤖 AI Summary
Conventional backdoor attacks (e.g., BadNet) largely fail against prompt-driven video segmentation foundation models (VSFMs) such as SAM2, achieving an attack success rate (ASR) below 5%, because encoder gradients for clean and triggered samples remain aligned and the attention mechanism continues to focus on the true object.
Method: This paper identifies the root causes of this robustness and proposes BadVSFM, the first dedicated backdoor framework for VSFMs, built on a two-stage decoupled attack paradigm: (i) perturbing the image encoder's mappings and (ii) manipulating the mask decoder's outputs, enabling targeted trigger injection while preserving clean-sample fidelity. The framework combines gradient-conflict analysis, attention-guided visualization, contrastive encoding constraints, multi-prompt shared-target-mask supervision, and dual-reference encoder distillation.
Results: Evaluated on two benchmarks across five VSFMs, BadVSFM achieves significantly higher ASR without degrading original segmentation accuracy. Moreover, it evades mainstream defenses, demonstrating both stealthiness and effectiveness.
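The two-stage idea can be made concrete with a minimal sketch of the training objectives. This is an illustrative reconstruction, not the paper's actual loss functions: the function names, the use of mean-squared error, and the `lam` weighting are all assumptions; the structure (an "attack" term steering triggered inputs toward a fixed target, plus a "fidelity" term anchoring clean inputs to a frozen reference model) follows the description above.

```python
import numpy as np

def stage1_encoder_loss(emb_trig, emb_clean, target_emb, ref_emb_clean, lam=1.0):
    """Hypothetical stage-1 objective: pull triggered-frame embeddings toward a
    designated target embedding, while keeping clean-frame embeddings close to
    those of a frozen clean reference encoder (MSE terms are an assumption)."""
    attack = np.mean((emb_trig - target_emb) ** 2)        # triggered -> target embedding
    fidelity = np.mean((emb_clean - ref_emb_clean) ** 2)  # clean stays near reference
    return attack + lam * fidelity

def stage2_decoder_loss(masks_trig_per_prompt, target_mask, mask_clean,
                        ref_mask_clean, lam=1.0):
    """Hypothetical stage-2 objective: across prompt types, every triggered
    frame-prompt pair is supervised toward one shared target mask, while clean
    outputs track a frozen reference decoder."""
    attack = np.mean([np.mean((m - target_mask) ** 2)     # shared target mask
                      for m in masks_trig_per_prompt])    # one term per prompt type
    fidelity = np.mean((mask_clean - ref_mask_clean) ** 2)
    return attack + lam * fidelity
```

Keeping the two fidelity terms weighted against the attack terms is what lets the backdoor activate on triggered inputs without degrading clean segmentation quality.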
📝 Abstract
Prompt-driven Video Segmentation Foundation Models (VSFMs) such as SAM2 are increasingly deployed in applications like autonomous driving and digital pathology, raising concerns about backdoor threats. Surprisingly, we find that directly transferring classic backdoor attacks (e.g., BadNet) to VSFMs is almost ineffective, with ASR below 5%. To understand this, we study encoder gradients and attention maps and observe that conventional training keeps gradients for clean and triggered samples largely aligned, while attention still focuses on the true object, preventing the encoder from learning a distinct trigger-related representation. To address this challenge, we propose BadVSFM, the first backdoor framework tailored to prompt-driven VSFMs. BadVSFM uses a two-stage strategy: (1) steer the image encoder so triggered frames map to a designated target embedding while clean frames remain aligned with a clean reference encoder; (2) train the mask decoder so that, across prompt types, triggered frame-prompt pairs produce a shared target mask, while clean outputs stay close to a reference decoder. Extensive experiments on two datasets and five VSFMs show that BadVSFM achieves strong, controllable backdoor effects under diverse triggers and prompts while preserving clean segmentation quality. Ablations over losses, stages, targets, trigger settings, and poisoning rates demonstrate robustness to reasonable hyperparameter changes and confirm the necessity of the two-stage design. Finally, gradient-conflict analysis and attention visualizations show that BadVSFM separates triggered and clean representations and shifts attention to trigger regions, while four representative defenses remain largely ineffective, revealing an underexplored vulnerability in current VSFMs.