Complementary Text-Guided Attention for Zero-Shot Adversarial Robustness

πŸ“… 2026-03-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the vulnerability of pretrained vision-language models, such as CLIP, to adversarial attacks in zero-shot settings, where perturbations disrupt text-guided attention and degrade robustness. To mitigate this, the authors propose TGA-ZSR, a novel framework that explicitly leverages the impact of adversarial perturbations on text-guided attention. It introduces local attention refinement and global attention constraint modules to enhance robustness while preserving CLIP’s zero-shot generalization capability. Furthermore, a complementary text-guided attention mechanism is developed to fuse foreground attention driven by both class-specific and non-class prompts, improving feature representation quality. Evaluated across 16 datasets, TGA-ZSR and its variant Comp-TGA achieve absolute improvements of 9.58% and 11.95%, respectively, in zero-shot adversarial accuracy over the current state-of-the-art methods.

πŸ“ Abstract
Owing to their impressive zero-shot capabilities, pre-trained vision-language models (e.g., CLIP) have attracted widespread attention and adoption across various domains. Nonetheless, CLIP has been observed to be susceptible to adversarial examples. Through experimental analysis, we observe that adversarial perturbations induce shifts in text-guided attention. Building on this observation, we propose a simple yet effective strategy: Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR). This framework incorporates two components: a Local Attention Refinement Module and a Global Attention Constraint Module. Our goal is to maintain the generalization of the CLIP model while enhancing its adversarial robustness. The Global Attention Constraint Module acquires text-guided attention from both the target and original models using clean examples; its objective is to maintain performance on clean samples while enhancing overall robustness. However, we observe that this method occasionally focuses on irrelevant or spurious features, which can lead to suboptimal performance and undermine robustness in certain scenarios. To overcome this limitation, we further propose Complementary Text-Guided Attention (Comp-TGA), which integrates two types of foreground attention: attention guided by the class prompt and reversed attention driven by the non-class prompt. These complementary attention mechanisms allow the model to capture a more comprehensive and accurate representation of the foreground. Experiments validate that TGA-ZSR and Comp-TGA yield improvements of 9.58% and 11.95%, respectively, in zero-shot robust accuracy over current state-of-the-art techniques across 16 datasets.
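The complementary attention idea above can be sketched in code: attention from the class prompt highlights the foreground directly, while low similarity to a non-class prompt is reversed into a second foreground map, and the two are fused. This is a minimal illustrative sketch, not the paper's implementation; the function names, the cosine-similarity attention, the min-max normalization, and the fusion weight `alpha` are all assumptions for demonstration.

```python
import numpy as np

def text_guided_attention(patch_feats, text_embed):
    """Per-patch attention as cosine similarity to a text embedding (assumed form)."""
    p = patch_feats / np.linalg.norm(patch_feats, axis=-1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    sim = p @ t  # (num_patches,)
    # Min-max normalize to [0, 1] so maps from different prompts are comparable.
    return (sim - sim.min()) / (sim.max() - sim.min() + 1e-8)

def complementary_attention(patch_feats, class_embed, nonclass_embed, alpha=0.5):
    """Fuse class-prompt attention with reversed non-class-prompt attention."""
    # Foreground attention driven by the class prompt.
    a_class = text_guided_attention(patch_feats, class_embed)
    # Reversed attention from the non-class prompt: patches dissimilar to the
    # non-class prompt are treated as additional evidence for the foreground.
    a_nonclass = 1.0 - text_guided_attention(patch_feats, nonclass_embed)
    # Convex fusion of the two complementary foreground maps (alpha is a guess).
    return alpha * a_class + (1.0 - alpha) * a_nonclass

# Toy usage with random stand-ins for CLIP patch features and text embeddings.
rng = np.random.default_rng(0)
patch_feats = rng.normal(size=(49, 512))   # e.g., 7x7 ViT patch grid
class_embed = rng.normal(size=512)          # embedding of the class prompt
nonclass_embed = rng.normal(size=512)       # embedding of a non-class prompt
att = complementary_attention(patch_feats, class_embed, nonclass_embed)
```

Because both component maps are normalized to [0, 1], the convex combination stays in [0, 1], so the fused map can be used directly as a foreground weighting over patches.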
Problem

Research questions and friction points this paper is trying to address.

zero-shot
adversarial robustness
vision-language models
CLIP
adversarial examples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-Guided Attention
Zero-Shot Adversarial Robustness
Complementary Attention
Vision-Language Models
CLIP
πŸ”Ž Similar Papers
No similar papers found.