🤖 AI Summary
Large Vision-Language Models (LVLMs) exhibit novel security vulnerabilities when they integrate visual inputs, allowing conventional adversarial attacks to bypass their built-in safety mechanisms. This work traces the representational origins of this phenomenon and proposes the first two-stage evaluation framework for adversarial attacks on LVLMs: Stage I disentangles attack outcomes into three categories (instruction non-compliance, refusal, and successful attack); Stage II quantifies how fully the model's output serves the harmful intent of the prompt and subdivides refusal behavior into direct, soft, and partial refusals, assessed against a normative specification of idealized safety responses. The study delivers the first systematic characterization of LVLM safety failure modes, establishes a quantifiable standard for multimodal safety evaluation, and provides both theoretical foundations and practical methodology for strengthening model robustness and safety alignment.
📝 Abstract
Large Vision-Language Models (LVLMs) have shown remarkable capabilities across a wide range of multimodal tasks. However, their integration of visual inputs expands the attack surface, exposing them to novel security vulnerabilities. In this work, we conduct a systematic representational analysis to uncover why conventional adversarial attacks can circumvent the safety mechanisms embedded in LVLMs. We further propose a novel two-stage evaluation framework for adversarial attacks on LVLMs. The first stage differentiates among instruction non-compliance, outright refusal, and successful adversarial exploitation. The second stage quantifies the degree to which the model's output fulfills the harmful intent of the adversarial prompt, while categorizing refusal behavior into direct refusals, soft refusals, and partial refusals that remain inadvertently helpful. Finally, we introduce a normative schema that defines idealized model behavior when confronted with harmful prompts, offering a principled target for safety alignment in multimodal systems.
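To make the two-stage taxonomy concrete, here is a minimal Python sketch of the evaluation protocol as described in the abstract. The names `Stage1Outcome`, `RefusalType`, `EvaluationResult`, and the `judge` interface are hypothetical illustrations, not identifiers from the paper; the actual framework may implement the classifier differently (e.g. as an LLM grader or human annotation pipeline).

```python
from dataclasses import dataclass
from enum import Enum


class Stage1Outcome(Enum):
    """Stage I: coarse triage of an attack outcome (terms from the abstract)."""
    NON_COMPLIANCE = "instruction_non_compliance"  # model ignores or misreads the instruction
    REFUSAL = "refusal"                            # model declines to comply
    SUCCESSFUL_ATTACK = "successful_attack"        # model serves the harmful intent


class RefusalType(Enum):
    """Stage II: fine-grained refusal categories (terms from the abstract)."""
    DIRECT = "direct_refusal"    # explicit, unqualified refusal
    SOFT = "soft_refusal"        # hedged refusal, e.g. deflection with warnings
    PARTIAL = "partial_refusal"  # nominal refusal that still leaks helpful content


@dataclass
class EvaluationResult:
    """Scores for one adversarial prompt/response pair across both stages."""
    outcome: Stage1Outcome
    harm_fulfillment: float | None = None  # Stage II: degree in [0, 1] to which the
                                           # output fulfills the harmful intent
    refusal_type: RefusalType | None = None  # Stage II: set only when outcome is REFUSAL


def evaluate(response: str, judge) -> EvaluationResult:
    """Apply Stage I triage, then the matching Stage II assessment.

    `judge` stands in for whatever classifier the framework uses; its
    methods below (classify_outcome, classify_refusal, score_harm) are
    placeholders for illustration only.
    """
    outcome = judge.classify_outcome(response)  # Stage I
    if outcome is Stage1Outcome.REFUSAL:
        return EvaluationResult(outcome, refusal_type=judge.classify_refusal(response))
    if outcome is Stage1Outcome.SUCCESSFUL_ATTACK:
        return EvaluationResult(outcome, harm_fulfillment=judge.score_harm(response))
    return EvaluationResult(outcome)
```

Under this reading, the normative schema of idealized responses would serve as the reference the `judge` compares against: a partial refusal scores worse than a direct one precisely because it diverges further from the idealized behavior on the same harmful prompt.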