BackdoorIDS: Zero-shot Backdoor Detection for Pretrained Vision Encoder

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the security threat that backdoored pretrained vision encoders pose to downstream applications. The authors propose a training-free, zero-shot backdoor detection method that operates at inference time by generating a sequence of image embeddings through progressive input masking. The approach is the first to leverage the phenomenon of attention hijacking and restoration, combined with DBSCAN-based density clustering over the embedding trajectories, to identify backdoored samples. The method is plug-and-play and compatible with diverse architectures, including CNNs, Vision Transformers (ViTs), CLIP, and LLaVA-1.5, and it outperforms existing defenses across multiple attack types, datasets, and models. Its strong generalization and practical applicability make it a promising candidate for real-world deployment.
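The core signal described above, a smooth embedding trajectory for clean inputs versus an abrupt jump once masking deactivates a trigger, can be illustrated with a toy sketch. Everything here is illustrative, not the paper's implementation: the raster-order masking schedule, the masking ratios, and `toy_encoder` (a stand-in whose output snaps to a fixed "hijacked" embedding while a planted trigger pixel survives masking) are all assumptions for demonstration.

```python
import numpy as np

def embedding_trajectory(image, encoder, ratios=(0.0, 0.2, 0.4, 0.6, 0.8)):
    """Encode the image once per masking ratio, zeroing pixels progressively
    in a fixed raster order (a simple stand-in for patch-wise masking)."""
    n = image.size
    embs = []
    for r in ratios:
        masked = image.copy()
        masked.reshape(-1)[: int(r * n)] = 0.0  # mask the first r*n pixels
        embs.append(encoder(masked))
    return np.stack(embs)

# Hypothetical backdoored encoder: while pixel index 30 still holds the
# trigger value 9.0, the representation is "hijacked" to a fixed embedding;
# once masking removes the trigger, it snaps back to the benign embedding.
def toy_encoder(x):
    return np.array([5.0, 5.0]) if x.ravel()[30] == 9.0 else np.array([0.0, 0.0])

clean = np.ones((10, 10))
triggered = clean.copy()
triggered.ravel()[30] = 9.0  # plant the toy trigger

print(embedding_trajectory(clean, toy_encoder))      # smooth: all rows identical
print(embedding_trajectory(triggered, toy_encoder))  # abrupt jump once trigger is masked
```

With this toy setup, the triggered image's trajectory starts at the hijacked embedding and jumps to the benign one as soon as the masking ratio covers the trigger pixel, while the clean image's trajectory never moves.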

📝 Abstract
Self-supervised and multimodal vision encoders learn strong visual representations that are widely adopted in downstream vision tasks and large vision-language models (LVLMs). However, downstream users often rely on third-party pretrained encoders with uncertain provenance, exposing them to backdoor attacks. In this work, we propose BackdoorIDS, a simple yet effective zero-shot, inference-time backdoor-sample detection method for pretrained vision encoders. BackdoorIDS is motivated by two observations: Attention Hijacking and Restoration. Under progressive input masking, a backdoored image initially concentrates attention on malicious trigger features. Once the masking ratio exceeds the trigger's robustness threshold, the trigger is deactivated, and attention rapidly shifts to benign content. This transition induces a pronounced change in the image embedding, whereas embeddings of clean images evolve more smoothly as masking progresses. BackdoorIDS operationalizes this signal by extracting an embedding sequence along the masking trajectory and applying density-based clustering such as DBSCAN. An input is flagged as backdoored if its embedding sequence forms more than one cluster. Extensive experiments show that BackdoorIDS consistently outperforms existing defenses across diverse attack types, datasets, and model families. Notably, it is a plug-and-play approach that requires no retraining and operates fully zero-shot at inference time, making it compatible with a wide range of encoder architectures, including CNNs, ViTs, CLIP, and LLaVA-1.5.
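The decision rule in the abstract (flag an input when its embedding sequence forms more than one cluster) can be sketched with off-the-shelf DBSCAN. This is a minimal illustration under assumed settings: `eps`, `min_samples`, and the two hand-made trajectories are hypothetical, and a real deployment would tune them per encoder and embedding scale.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def flag_backdoor(trajectory, eps=0.5, min_samples=2):
    """Flag an input as backdoored if its embedding trajectory (one row per
    masking ratio) splits into more than one density cluster.
    DBSCAN noise points (label -1) are not counted as clusters."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(trajectory)
    return len(set(labels) - {-1}) > 1

# Illustrative trajectories: a clean input drifts smoothly through embedding
# space, while a backdoored input jumps between two well-separated modes
# (trigger-dominated vs. benign-content embeddings).
clean_traj = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [0.3, 0.1], [0.3, 0.2]])
bd_traj    = np.array([[5.0, 5.0], [5.1, 5.0], [0.0, 0.0], [0.1, 0.0], [0.2, 0.1]])

print(flag_backdoor(clean_traj))  # False: one dense cluster
print(flag_backdoor(bd_traj))     # True: two clusters, pronounced jump
```

The choice of a density-based method matters here: the number of clusters is not fixed in advance, so a smooth trajectory collapses into a single cluster while an abrupt transition naturally yields two.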
Problem

Research questions and friction points this paper is trying to address.

backdoor detection
pretrained vision encoder
zero-shot
inference-time security
trustworthy AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

zero-shot backdoor detection
attention hijacking
progressive input masking
embedding trajectory clustering
pretrained vision encoder
👥 Authors
Siquan Huang
School of Computer Science and Engineering, South China University of Technology
Yijiang Li
Argonne National Laboratory
Ningzhi Gao
School of Computer Science and Engineering, South China University of Technology
Xingfu Yan
School of Computer Science, South China Normal University
Leyu Shi
School of Computer Science and Engineering, South China University of Technology