🤖 AI Summary
This study investigates the robustness of vision-language models (VLMs) under highly sparse, geometrically constrained, and imperceptible adversarial perturbations, revealing a cross-task semantic failure risk inherent in their shared multimodal representation space. To this end, the authors propose the X-shaped Sparse Pixel Attack (XSPA), which restricts perturbations exclusively to two crossing diagonal lines (modifying only about 1.76% of pixels) and jointly optimizes a classification loss, cross-task semantic guidance, and along-line smoothness regularization to achieve efficient, transferable attacks. Experiments demonstrate that XSPA drastically degrades zero-shot classification accuracy on COCO, by 52.33 points for OpenAI CLIP ViT-L/14 and 67.00 points for OpenCLIP ViT-B/16, while also substantially reducing image-captioning consistency (by up to 58.60 points) and VQA accuracy (by up to 44.38 points). By pairing a fixed geometric prior with an extremely low perturbation budget, this work establishes a more stringent protocol for evaluating multimodal robustness.
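To make the pixel budget concrete, here is a minimal sketch (not the authors' code) of an X-shaped support mask over a square image. The function name `x_mask`, the 224×224 resolution, and the diagonal-band definition are all illustrative assumptions; with a two-pixel line width, the selected fraction lands near the reported ~1.76%.

```python
def x_mask(n, width=1):
    """Boolean mask selecting an X of two diagonal bands on an n x n image.

    A pixel (i, j) is perturbable if it lies on a band of `width` pixels
    along the main diagonal (j near i) or the anti-diagonal (j near n-1-i).
    This is a hypothetical reconstruction, not the paper's implementation.
    """
    return [
        [0 <= i - j < width or 0 <= i + j - (n - 1) < width
         for j in range(n)]
        for i in range(n)
    ]

# 224x224 is a common VLM input resolution (an assumption; the paper's
# exact setting may differ).
n = 224
for w in (1, 2):
    frac = sum(map(sum, x_mask(n, w))) / (n * n)
    print(f"width={w}: {frac:.2%} of pixels perturbable")
```

With `width=1` the two single-pixel diagonals cover about 0.89% of pixels; doubling the line width roughly doubles the budget, close to the ~1.76% reported for XSPA's default setting.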
📝 Abstract
Vision-language models (VLMs) rely on a shared visual-textual representation space to perform tasks such as zero-shot classification, image captioning, and visual question answering (VQA). While this shared space enables strong cross-task generalization, it may also introduce a common vulnerability: small visual perturbations can propagate through the shared embedding space and cause correlated semantic failures across tasks. This risk is particularly important in interactive and decision-support settings, yet it remains unclear whether VLMs are robust to highly constrained, sparse, and geometrically fixed perturbations. To address this question, we propose X-shaped Sparse Pixel Attack (XSPA), an imperceptible structured attack that restricts perturbations to two intersecting diagonal lines. Compared with dense perturbations or flexible localized patches, XSPA operates under a much stricter attack budget and thus provides a more stringent test of VLM robustness. Within this sparse support, XSPA jointly optimizes a classification objective, cross-task semantic guidance, and regularization on perturbation magnitude and along-line smoothness, inducing transferable misclassification as well as semantic drift in captioning and VQA while preserving visual subtlety. Under the default setting, XSPA modifies only about 1.76% of image pixels. Experiments on the COCO dataset show that XSPA consistently degrades performance across all three tasks. Zero-shot accuracy drops by 52.33 points on OpenAI CLIP ViT-L/14 and 67.00 points on OpenCLIP ViT-B/16, while GPT-4-evaluated caption consistency decreases by up to 58.60 points and VQA correctness by up to 44.38 points. These results suggest that even highly sparse and visually subtle perturbations with fixed geometric priors can substantially disrupt cross-task semantics in VLMs, revealing a notable robustness gap in current multimodal systems.
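The abstract describes a joint objective combining a classification term, cross-task semantic guidance, and penalties on perturbation magnitude and along-line smoothness. The sketch below is an assumed formulation for illustration only: the function name `xspa_objective`, the additive weighting, the sign convention (maximize task losses minus penalties), and all `lam_*` weights are hypothetical, not taken from the paper.

```python
def xspa_objective(cls_loss, sem_loss, delta_main, delta_anti,
                   lam_sem=1.0, lam_mag=0.1, lam_smooth=0.1):
    """Hedged sketch of a joint attack objective like the one described.

    cls_loss, sem_loss : scalar task losses from the attacked model
        (classification objective and cross-task semantic guidance);
        treated as precomputed placeholders here.
    delta_main, delta_anti : perturbation values along the two diagonals.
    """
    # Magnitude penalty: keeps the X-shaped perturbation visually subtle.
    mag = sum(d * d for d in delta_main) + sum(d * d for d in delta_anti)
    # Along-line smoothness: penalize jumps between neighboring pixels
    # on each diagonal so the lines stay faint and coherent.
    smooth = sum((a - b) ** 2 for a, b in zip(delta_main, delta_main[1:])) \
           + sum((a - b) ** 2 for a, b in zip(delta_anti, delta_anti[1:]))
    # The attacker maximizes the task losses while paying for visibility.
    return cls_loss + lam_sem * sem_loss - lam_mag * mag - lam_smooth * smooth
```

With all perturbation values at zero, the objective reduces to the weighted sum of the two task losses; any non-smooth or large perturbation along a line lowers it, which is one way the "visual subtlety" constraint in the abstract could be enforced.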