🤖 AI Summary
This work exposes critical vulnerabilities that arise in Multimodal Language Models (MMLMs) during post-training alignment: role confusion between user and assistant, and structural sensitivity to image token positioning. Addressing the limitations of existing methods, which focus solely on assistant-side generation and rely on fixed prompt templates, we propose the Role-Modality Attack (RMA), a novel adversarial attack that preserves query semantics while perturbing input structure (e.g., role label ordering and image token placement). We systematically define and implement two structured attack classes, role confusion and modality position manipulation, demonstrating for the first time that they compose into stronger attacks and consistently project onto the negative refusal direction in the residual stream. Extensive evaluation across eight settings and multiple Vision Language Models (VLMs) confirms attack efficacy. Furthermore, our proposed adversarial training framework substantially reduces attack success rates without compromising original task performance.
📝 Abstract
Multimodal Language Models (MMLMs) typically undergo post-training alignment to prevent harmful content generation. However, these alignment stages focus primarily on the assistant role, leaving the user role unaligned, and they rely on a fixed input prompt structure of special tokens, leaving the model vulnerable when inputs deviate from these expectations. We introduce Role-Modality Attacks (RMA), a novel class of adversarial attacks that exploit role confusion between the user and assistant and alter the position of the image token to elicit harmful outputs. Unlike existing attacks that modify query content, RMAs manipulate the input structure without altering the query itself. We systematically evaluate these attacks across multiple Vision Language Models (VLMs) in eight distinct settings, showing that they can be composed to create stronger adversarial prompts, as also evidenced by their increased projection in the negative refusal direction in the residual stream, a property observed in prior successful attacks. Finally, for mitigation, we propose an adversarial training approach that makes the model robust against input prompt perturbations. Training the model on a range of harmful and benign prompts, each perturbed with a different RMA setting, removes its sensitivity to Role Confusion and Modality Manipulation attacks and teaches it to attend only to the content of the query within the input prompt structure, effectively reducing Attack Success Rate (ASR) while preserving the model's general utility.
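To make the structural perturbations concrete, the sketch below assembles a VLM chat prompt with a controllable role label and image-token position while leaving the query text untouched. The special tokens (`<|user|>`, `<|assistant|>`, `<image>`) and the `build_prompt` helper are illustrative placeholders for a generic chat template, not the paper's actual implementation:

```python
# Illustrative sketch of RMA-style prompt perturbations on a generic
# VLM chat template. Token names are hypothetical placeholders.

def build_prompt(query: str, role: str = "user", image_pos: str = "before") -> str:
    """Assemble an input prompt with a controllable role label and
    image-token position; the query content itself is never modified."""
    image_tok = "<image>"
    # Modality position manipulation: place the image token before or
    # after the text query instead of using the template's fixed slot.
    body = f"{image_tok}\n{query}" if image_pos == "before" else f"{query}\n{image_tok}"
    # Role confusion: wrap the query in a chosen role label (e.g. the
    # assistant label) rather than the expected user label.
    return f"<|{role}|>\n{body}\n<|assistant|>\n"

query = "Describe this picture."
baseline  = build_prompt(query)                                      # expected structure
role_swap = build_prompt(query, role="assistant")                    # role confusion
img_after = build_prompt(query, image_pos="after")                   # modality position
combined  = build_prompt(query, role="assistant", image_pos="after") # composed attack
```

The point the abstract makes is visible here: all four prompts carry the identical query string, so content-based filters see nothing new, yet the surrounding structure deviates from the single template the model was aligned on.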