Diversifying Counterattacks: Orthogonal Exploration for Robust CLIP Inference

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language pre-trained models (VLPs) such as CLIP remain vulnerable to adversarial examples. The existing Test-Time Counterattack (TTC) defense generates counterattack perturbations from gradients with respect to the adversarial input alone, which confines the search to a narrow space and limits diversity. To address this, the paper proposes Directional Orthogonal Counterattack (DOC), a test-time defense that (i) introduces orthogonal gradient exploration to expand the counterattack search space, (ii) incorporates momentum-based updates to stabilize and broaden the optimization trajectory, and (iii) designs a directional sensitivity score based on averaged cosine similarity to adaptively modulate counterattack strength. Evaluated across 16 benchmark datasets, DOC consistently improves robustness under diverse adversarial attacks while preserving competitive clean accuracy, advancing the state of test-time robustness for multimodal models.

📝 Abstract
Vision-language pre-training models (VLPs) demonstrate strong multimodal understanding and zero-shot generalization, yet remain vulnerable to adversarial examples, raising concerns about their reliability. Recent work, Test-Time Counterattack (TTC), improves robustness by generating perturbations that maximize the embedding deviation of adversarial inputs using PGD, pushing them away from their adversarial representations. However, due to the fundamental difference in optimization objectives between adversarial attacks and counterattacks, generating counterattacks solely based on gradients with respect to the adversarial input confines the search to a narrow space. As a result, the counterattacks could overfit limited adversarial patterns and lack the diversity to fully neutralize a broad range of perturbations. In this work, we argue that enhancing the diversity and coverage of counterattacks is crucial to improving adversarial robustness in test-time defense. Accordingly, we propose Directional Orthogonal Counterattack (DOC), which augments counterattack optimization by incorporating orthogonal gradient directions and momentum-based updates. This design expands the exploration of the counterattack space and increases the diversity of perturbations, which facilitates the discovery of more generalizable counterattacks and ultimately improves the ability to neutralize adversarial perturbations. Meanwhile, we present a directional sensitivity score based on averaged cosine similarity to boost DOC by improving example discrimination and adaptively modulating the counterattack strength. Extensive experiments on 16 datasets demonstrate that DOC improves adversarial robustness under various attacks while maintaining competitive clean accuracy. Code is available at https://github.com/bookman233/DOC.
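The two optimization ingredients the abstract describes, mixing an orthogonal exploration direction into the gradient and smoothing updates with momentum under an L-infinity budget, can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' implementation; the function names and hyperparameters (`mu`, `alpha`, `epsilon`, `beta`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthogonal_component(v, g):
    """Project v onto the subspace orthogonal to the gradient g (Gram-Schmidt)."""
    return v - (v @ g) / (g @ g) * g

def doc_style_step(delta, g, m, mu=0.9, alpha=0.01, epsilon=0.03, beta=0.5):
    """One counterattack update: mix the normalized gradient with a random
    direction made orthogonal to it, accumulate momentum, and take a signed
    step clipped to an L-infinity budget epsilon."""
    r = rng.standard_normal(g.shape)           # random exploration direction
    r_orth = orthogonal_component(r, g)        # keep only the part orthogonal to g
    r_orth /= np.linalg.norm(r_orth) + 1e-12   # normalize the orthogonal part
    direction = g / (np.linalg.norm(g) + 1e-12) + beta * r_orth
    m = mu * m + direction                     # momentum accumulation
    delta = np.clip(delta + alpha * np.sign(m), -epsilon, epsilon)
    return delta, m

# Toy usage: a fake embedding-space gradient and an initial zero perturbation.
g = rng.standard_normal(8)
delta, m = np.zeros(8), np.zeros(8)
delta, m = doc_style_step(delta, g, m)
```

In a real CLIP pipeline, `g` would come from backpropagating the embedding-deviation objective through the image encoder; the sketch only shows how the orthogonal direction and momentum combine.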
Problem

Research questions and friction points this paper is trying to address.

CLIP models remain vulnerable to adversarial examples despite strong multimodal understanding
Counterattacks generated solely from gradients w.r.t. the adversarial input are confined to a narrow search space
The resulting low-diversity counterattacks overfit limited adversarial patterns and fail to neutralize a broad range of perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces orthogonal gradient directions to diversify counterattack perturbations
Incorporates momentum-based updates to stabilize and broaden the perturbation search
Applies a directional sensitivity score (averaged cosine similarity) to adaptively modulate counterattack strength
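The scoring idea in the last point, averaged cosine similarity used to modulate counterattack strength, could look roughly like the sketch below. The mapping from score to strength (`adaptive_strength`) is a placeholder assumption for illustration, not the paper's exact rule.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def directional_sensitivity(image_emb, text_embs):
    """Average cosine similarity between an image embedding and a set of
    class/text embeddings; a low average suggests the input may have been
    pushed off-manifold by an attack."""
    return sum(cosine(image_emb, t) for t in text_embs) / len(text_embs)

def adaptive_strength(score, base_eps=0.03):
    """Placeholder rule: increase counterattack strength as the sensitivity
    score drops (more suspicious input -> stronger counterattack)."""
    score = min(max(score, 0.0), 1.0)
    return base_eps * (2.0 - score)
```

A practical usage would compute `directional_sensitivity` once per test input against the class-prompt embeddings and pass the result to `adaptive_strength` to set the perturbation budget for that input.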