🤖 AI Summary
Vision-language pre-trained models (VLPs) lack robustness against adversarial examples. The existing test-time defense, Test-Time Counterattack (TTC), searches a narrow perturbation space with low diversity because it relies on a single gradient direction. To address this, the authors propose Directional Orthogonal Counterattack (DOC), a test-time defense that (i) introduces orthogonal gradient exploration to expand the counterattack search space, (ii) incorporates momentum-based updates to stabilize the optimization trajectory, and (iii) adds a directional sensitivity score based on averaged cosine similarity to adaptively modulate counterattack strength. Evaluated across 16 benchmark datasets, DOC consistently improves robustness under diverse adversarial attacks while preserving competitive clean accuracy, offering a more reliable and diverse test-time defense for VLPs.
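The orthogonal-gradient and momentum components above can be sketched as a PGD-style update. This is a minimal illustrative sketch, not the paper's exact algorithm: the mixing weight `beta`, the momentum rule, and the use of a random direction projected orthogonal to the gradient are all assumptions for illustration.

```python
import numpy as np

def orthogonal_component(v, g):
    """Part of v orthogonal to g (one Gram-Schmidt step)."""
    g_hat = g / (np.linalg.norm(g) + 1e-12)
    return v - np.dot(v, g_hat) * g_hat

def doc_style_step(delta, grad, momentum, rng,
                   mu=0.9, beta=0.5, alpha=1/255, eps=4/255):
    """One counterattack update: mix the embedding-deviation gradient
    with a random direction made orthogonal to it, smooth the combined
    direction with momentum, and project the perturbation back into
    the L-inf eps-ball. Hyperparameters are illustrative defaults."""
    ortho = orthogonal_component(rng.standard_normal(grad.shape), grad)
    direction = grad / (np.linalg.norm(grad) + 1e-12) \
              + beta * ortho / (np.linalg.norm(ortho) + 1e-12)
    momentum = mu * momentum + direction
    delta = np.clip(delta + alpha * np.sign(momentum), -eps, eps)
    return delta, momentum
```

In a real pipeline `grad` would come from differentiating the embedding-deviation objective through the VLP's image encoder; the orthogonal term is what widens the search beyond the single gradient direction.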
📝 Abstract
Vision-language pre-training models (VLPs) demonstrate strong multimodal understanding and zero-shot generalization, yet remain vulnerable to adversarial examples, raising concerns about their reliability. Recent work, Test-Time Counterattack (TTC), improves robustness by using PGD to generate perturbations that maximize the embedding deviation of adversarial inputs, pushing them away from their adversarial representations. However, due to the fundamental difference in optimization objectives between adversarial attacks and counterattacks, generating counterattacks solely from gradients with respect to the adversarial input confines the search to a narrow space. As a result, the counterattacks may overfit to limited adversarial patterns and lack the diversity to fully neutralize a broad range of perturbations. In this work, we argue that enhancing the diversity and coverage of counterattacks is crucial to improving adversarial robustness in test-time defense. Accordingly, we propose Directional Orthogonal Counterattack (DOC), which augments counterattack optimization by incorporating orthogonal gradient directions and momentum-based updates. This design expands the exploration of the counterattack space and increases the diversity of perturbations, which facilitates the discovery of more generalizable counterattacks and ultimately improves the ability to neutralize adversarial perturbations. Meanwhile, we present a directional sensitivity score based on averaged cosine similarity that strengthens DOC by improving example discrimination and adaptively modulating the counterattack strength. Extensive experiments on 16 datasets demonstrate that DOC improves adversarial robustness under various attacks while maintaining competitive clean accuracy. Code is available at https://github.com/bookman233/DOC.
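The directional sensitivity score can be illustrated as follows. This is a hedged sketch, not the paper's exact formulation: the probing scheme (random Gaussian nudges), the number of directions, and the linear mapping from score to counterattack budget in `adaptive_strength` are all assumptions, and `embed_fn` stands in for the VLP's image encoder.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def directional_sensitivity(embed_fn, x, n_dirs=8, sigma=0.01, seed=0):
    """Average cosine similarity between the embedding of x and the
    embeddings of x nudged along random directions. A lower score
    suggests a direction-sensitive (likely adversarial) input."""
    rng = np.random.default_rng(seed)
    z = embed_fn(x)
    sims = [cosine(z, embed_fn(x + sigma * rng.standard_normal(x.shape)))
            for _ in range(n_dirs)]
    return float(np.mean(sims))

def adaptive_strength(score, eps_min=1/255, eps_max=8/255):
    """Map the sensitivity score (roughly in [-1, 1]) to a counterattack
    budget: lower similarity -> larger budget. Illustrative mapping only."""
    t = np.clip((1.0 - score) / 2.0, 0.0, 1.0)
    return eps_min + t * (eps_max - eps_min)
```

For example, a clean input whose embedding barely moves under small nudges scores near 1 and receives the minimal budget, while an input whose embedding shifts sharply receives a stronger counterattack.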