STRAP-ViT: Segregated Tokens with Randomized Transformations for Defense against Adversarial Patches in ViTs

📅 2026-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of Vision Transformers (ViTs) to adversarial patches, which exploit the self-attention mechanism by inducing excessive focus on high-contrast local regions, leading to misclassification. To counter this, the authors propose STRAP-ViT, the first defense framework that detects anomalous image patches at the token level using Jensen-Shannon divergence and mitigates their impact during inference through random composite transformations. Notably, STRAP-ViT is training-free, plug-and-play, and relies solely on statistical discrepancies among tokens for efficient detection and suppression of perturbations. Evaluated on ImageNet and CalTech-101, the method achieves classification accuracy within 2-3% of that on clean samples against diverse attacks including Adversarial Patch and LAVAN, significantly outperforming current state-of-the-art defenses.

📝 Abstract
Adversarial patches are physically realizable localized noise, able to hijack the self-attention of Vision Transformers (ViTs) by pulling focus toward a small, high-contrast region and corrupting the class token to force confident misclassifications. In this paper, we claim that the tokens corresponding to the areas of the image that contain the adversarial noise have different statistical properties than the tokens that do not overlap with the adversarial perturbations. We use this insight to propose a mechanism, called STRAP-ViT, which uses Jensen-Shannon divergence as a metric for segregating tokens that behave as anomalies in the Detection Phase, and then applies randomized composite transformations to them during the Mitigation Phase to render the adversarial noise ineffective. The minimum number of tokens to transform is a hyper-parameter of the defense mechanism and is chosen such that at least 50% of the patch is covered by the transformed tokens. STRAP-ViT fits as a non-trainable, plug-and-play block within ViT architectures, used at inference only, with minimal computational cost and no additional training cost or effort. STRAP-ViT has been tested on multiple pre-trained Vision Transformer architectures (ViT-base-16 and DinoV2) and datasets (ImageNet and CalTech-101), across multiple adversarial attacks (Adversarial Patch, LAVAN, GDPA and RP2), and provides robust accuracies within 2-3% of the clean baselines, outperforming the state of the art.
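The two-phase mechanism described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: it treats each token's attention row as a probability distribution, scores each row's Jensen-Shannon divergence against the mean attention distribution (the choice of reference distribution and the specific composite transformation, a permutation plus Gaussian noise, are placeholders for whatever the authors actually use), and transforms the top-k most anomalous tokens.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def segregate_tokens(attn, k):
    """Detection Phase (sketch): flag the k tokens whose attention rows
    diverge most from the mean attention distribution."""
    ref = attn.mean(axis=0)
    scores = np.array([js_divergence(attn[i], ref) for i in range(attn.shape[0])])
    return np.argsort(scores)[-k:]

def randomize_tokens(tokens, idx, rng):
    """Mitigation Phase (sketch): apply a random composite transformation
    (here: permute the flagged tokens and add Gaussian noise) to break
    the adversarial pattern while leaving clean tokens untouched."""
    out = tokens.copy()
    perm = rng.permutation(idx)
    out[idx] = tokens[perm] + rng.normal(0.0, 0.1, size=tokens[idx].shape)
    return out

# Toy example: 8 tokens, one of which (index 3) attends only to itself,
# mimicking a patch token hijacking attention.
rng = np.random.default_rng(0)
attn = np.full((8, 8), 1.0 / 8)
attn[3] = 0.0
attn[3, 3] = 1.0
flagged = segregate_tokens(attn, k=1)   # -> array([3])

tokens = rng.normal(size=(8, 4))
cleaned = randomize_tokens(tokens, flagged, rng)
```

In a real ViT the attention map would come from an intermediate layer and `k` would be set from the 50%-patch-coverage rule stated in the abstract; here `k=1` is chosen by hand for the toy example.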
Problem

Research questions and friction points this paper is trying to address.

adversarial patches
Vision Transformers
self-attention hijacking
robustness
misclassification
Innovation

Methods, ideas, or system contributions that make the work stand out.

STRAP-ViT
adversarial patch defense
token segregation
Jensen-Shannon divergence
randomized transformations