🤖 AI Summary
This work reveals a severe vulnerability of large vision-language models (VLMs) to image-level adversarial perturbations: imperceptible, pixel-level modifications to input images enable precise, token-by-token control over model outputs. To demonstrate this, we propose the Vision-language model Manipulation Attack (VMA), the first method to theoretically and empirically establish fine-grained, token-level controllability of VLMs through image inputs. VMA integrates first- and second-order momentum optimization with differentiable geometric transformations, enabling efficient, end-to-end adversarial example generation. We validate VMA across multiple VLMs (e.g., LLaVA, Qwen-VL) and benchmarks, demonstrating its strong attack capabilities, including jailbreaking, privacy leakage, denial-of-service, and sponge-sample injection, as well as practical utility, such as copyright watermark embedding. VMA thus exhibits both high destructiveness and precise controllability, establishing a novel paradigm for VLM security analysis and robustness enhancement.
📝 Abstract
Large Vision-Language Models (VLMs) have achieved remarkable success in understanding complex real-world scenarios and supporting data-driven decision-making. However, VLMs exhibit significant vulnerability to adversarial examples, whether textual or visual, which can lead to various adversarial outcomes, e.g., jailbreaking, hijacking, and hallucination. In this work, we empirically and theoretically demonstrate that VLMs are particularly susceptible to image-based adversarial examples, where imperceptible perturbations can precisely manipulate each output token. To this end, we propose a novel attack called the Vision-language model Manipulation Attack (VMA), which integrates first-order and second-order momentum optimization with a differentiable transformation mechanism to effectively optimize the adversarial perturbation. Notably, VMA is a double-edged sword: it can be leveraged to implement various attacks, such as jailbreaking, hijacking, privacy breaches, denial-of-service, and the generation of sponge examples, while simultaneously enabling the injection of watermarks for copyright protection. Extensive empirical evaluations substantiate the efficacy and generalizability of VMA across diverse scenarios and datasets.
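The optimization scheme described above, combining first- and second-order momentum with a perturbation-budget constraint, can be sketched in a minimal, framework-free form. The sketch below is illustrative only: it uses an Adam-style update on an L∞-bounded perturbation, and `grad_fn` stands in for the gradient that, in the actual method, would be obtained by backpropagating the attack loss through the VLM and the differentiable transformation; all names and hyperparameters here are assumptions, not the paper's implementation.

```python
import numpy as np

def momentum_perturbation_attack(image, grad_fn, epsilon=8 / 255, steps=100,
                                 lr=0.005, beta1=0.9, beta2=0.999, eps=1e-8):
    """Optimize an L_inf-bounded additive perturbation with first-order (m)
    and second-order (v) momentum, Adam-style. `grad_fn(adv)` must return
    the gradient of the attack loss w.r.t. the adversarial image; in the
    paper's setting it would come from the target VLM plus a differentiable
    geometric transformation (this helper is a hypothetical placeholder)."""
    delta = np.zeros_like(image)
    m = np.zeros_like(image)  # first-order momentum estimate
    v = np.zeros_like(image)  # second-order momentum estimate
    for t in range(1, steps + 1):
        g = grad_fn(np.clip(image + delta, 0.0, 1.0))
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)          # bias-corrected momenta
        v_hat = v / (1 - beta2 ** t)
        delta -= lr * m_hat / (np.sqrt(v_hat) + eps)  # descend the loss
        delta = np.clip(delta, -epsilon, epsilon)     # L_inf projection
    return np.clip(image + delta, 0.0, 1.0)
```

A toy check: with a quadratic surrogate loss whose minimizer lies inside the budget, the routine moves the image toward that minimizer while keeping the perturbation within the L∞ ball.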