Backdoor Directions in Vision Transformers

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how backdoor attacks are represented inside Vision Transformers (ViTs) and how they can be defended against. By identifying a "trigger direction" in activation space, the study establishes its causal role in governing backdoor behavior across both activation and parameter spaces, and observes, for the first time, that static patch-based and stealthy distributed triggers are processed differently within ViTs. Leveraging mechanistic interpretability, the authors propose a clean-data-free, weight-level detection method that combines linear interventions on activations and parameters with PGD-based perturbation analysis. The approach enables consistent cross-dataset interventions and high-accuracy detection of diverse backdoor attacks, while establishing an intrinsic connection between backdoor vulnerability and adversarial robustness.
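As a rough illustration of the "trigger direction" idea, the sketch below estimates a direction as the normalized difference between mean activations of triggered and clean inputs, then removes that component via a linear projection. All data is synthetic and the estimation procedure is an assumption for illustration; the paper's actual method may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                                   # activation dimensionality

# Ground-truth direction along which a (synthetic) trigger shifts activations.
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)

clean_acts = rng.normal(size=(200, d))   # stand-in clean activations
trig_acts = clean_acts + 3.0 * true_dir  # trigger adds a consistent offset

# Estimate the trigger direction from the mean activation difference.
v = trig_acts.mean(axis=0) - clean_acts.mean(axis=0)
v /= np.linalg.norm(v)

def ablate(acts, direction):
    """Linear intervention: project the trigger component out of activations."""
    return acts - np.outer(acts @ direction, direction)

cleaned = ablate(trig_acts, v)
print(np.abs(cleaned @ v).max())  # residual component along v is ~0
```

In this toy setting the estimated direction closely matches the true one because the trigger offset dominates the averaged noise; on a real ViT one would probe a chosen layer's residual-stream activations instead.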

📝 Abstract
This paper investigates how Backdoor Attacks are represented within Vision Transformers (ViTs). By assuming knowledge of the trigger, we identify a specific "trigger direction" in the model's activations that corresponds to the internal representation of the trigger. We confirm the causal role of this linear direction by showing that interventions in both activation and parameter space consistently modulate the model's backdoor behavior across multiple datasets and attack types. Using this direction as a diagnostic tool, we trace how backdoor features are processed across layers. Our analysis reveals distinct qualitative differences: static-patch triggers follow a different internal logic than stealthy, distributed triggers. We further examine the link between backdoors and adversarial attacks, specifically testing whether PGD-based perturbations (de-)activate the identified trigger mechanism. Finally, we propose a data-free, weight-based detection scheme for stealthy-trigger attacks. Our findings show that mechanistic interpretability offers a robust framework for diagnosing and addressing security vulnerabilities in computer vision.
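The abstract's test of whether PGD-based perturbations activate the trigger mechanism can be sketched minimally as follows. Here a linear feature map `W` stands in for the network up to the probed layer, and the objective pushes the activation along a given direction `v` under an L-infinity budget; the real models, losses, and attack budgets in the paper differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, d = 32, 16
W = rng.normal(size=(d, n_in)) / np.sqrt(n_in)  # toy stand-in feature map
v = rng.normal(size=d)
v /= np.linalg.norm(v)                          # probed "trigger direction"

x = rng.normal(size=n_in)                       # a single input
eps, alpha, steps = 0.25, 0.05, 40              # L_inf budget, step size, iterations

# Objective f(x + delta) = v^T W (x + delta); its gradient w.r.t. delta is W^T v.
grad = W.T @ v
delta = np.zeros(n_in)
for _ in range(steps):
    # PGD step: signed-gradient ascent, then projection back into the L_inf ball.
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)

before = v @ (W @ x)
after = v @ (W @ (x + delta))
print(after - before)  # activation gained along v under the budget
```

For this linear objective PGD saturates at `delta = eps * sign(grad)`; on a real network the gradient would be recomputed each step, and one would compare the gained activation against the shift a genuine trigger induces.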
Problem

Research questions and friction points this paper is trying to address.

Backdoor Attacks
Vision Transformers
Trigger Representation
Model Interpretability
Security Vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Backdoor Attacks
Vision Transformers
Trigger Direction
Mechanistic Interpretability
Weight-based Detection