🤖 AI Summary
This work addresses the security threat that backdoored pre-trained visual encoders pose to downstream applications. The authors propose a training-free, zero-shot backdoor detection method that operates at inference time by generating a sequence of image embeddings through progressive input masking. The approach leverages the phenomenon of attention hijacking and restoration, combined with DBSCAN-based density clustering on the embedding trajectories, to identify backdoored samples. The method is plug-and-play and compatible with diverse architectures, including CNNs, Vision Transformers (ViTs), CLIP, and LLaVA-1.5, and demonstrates superior performance over existing defenses across multiple attack types, datasets, and models. Its strong generalization and practical applicability make it a promising candidate for real-world deployment.
📝 Abstract
Self-supervised and multimodal vision encoders learn strong visual representations that are widely adopted in downstream vision tasks and large vision-language models (LVLMs). However, downstream users often rely on third-party pretrained encoders of uncertain provenance, exposing them to backdoor attacks. In this work, we propose BackdoorIDS, a simple yet effective zero-shot, inference-time backdoor sample detection method for pretrained vision encoders. BackdoorIDS is motivated by two observations: Attention Hijacking and Restoration. Under progressive input masking, a backdoored image initially concentrates attention on malicious trigger features. Once the masking ratio exceeds the trigger's robustness threshold, the trigger is deactivated and attention rapidly shifts to benign content. This transition induces a pronounced change in the image embedding, whereas embeddings of clean images evolve more smoothly as masking progresses. BackdoorIDS operationalizes this signal by extracting an embedding sequence along the masking trajectory and applying density-based clustering such as DBSCAN; an input is flagged as backdoored if its embedding sequence forms more than one cluster. Extensive experiments show that BackdoorIDS consistently outperforms existing defenses across diverse attack types, datasets, and model families. Notably, it is a plug-and-play approach that requires no retraining and operates fully zero-shot at inference time, making it compatible with a wide range of encoder architectures, including CNNs, ViTs, CLIP, and LLaVA-1.5.
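The masking-and-clustering procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random pixel-masking strategy, the `encoder` callable, the masking-ratio schedule, and the DBSCAN parameters (`eps`, `min_samples`) are all assumptions chosen for clarity.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def progressive_mask(image, ratio, rng):
    """Zero out a random fraction `ratio` of pixels (simple masking stand-in)."""
    masked = image.copy()
    idx = rng.choice(masked.size, size=int(ratio * masked.size), replace=False)
    masked.flat[idx] = 0.0
    return masked


def detect_backdoor(encoder, image, ratios=np.linspace(0.0, 0.9, 10),
                    eps=0.5, min_samples=2, seed=0):
    """Flag `image` as backdoored if its masking-trajectory embeddings
    split into more than one DBSCAN cluster (hypothetical parameters)."""
    rng = np.random.default_rng(seed)
    # Embed the image at progressively higher masking ratios.
    embeddings = np.stack([encoder(progressive_mask(image, r, rng))
                           for r in ratios])
    # Cluster the embedding trajectory; clean images drift smoothly
    # (one dense cluster), backdoored ones jump (two or more clusters).
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embeddings)
    n_clusters = len(set(labels) - {-1})  # -1 marks DBSCAN noise points
    return n_clusters > 1
```

As a toy check, an encoder whose output jumps once the "trigger" is masked away (here simulated by a mean-intensity threshold) is flagged, while a smoothly varying encoder is not:

```python
img = np.ones((8, 8))
smooth_enc = lambda x: np.array([x.mean(), x.std()])
trigger_enc = lambda x: (np.array([0.0, 0.0]) if x.mean() > 0.5
                         else np.array([10.0, 10.0]))
print(detect_backdoor(smooth_enc, img))   # False: one cluster
print(detect_backdoor(trigger_enc, img))  # True: embedding jumps -> two clusters
```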