🤖 AI Summary
Existing jailbreaking methods for Large Vision-Language Models (LVLMs) rely on toxic text continuation (Toxic-Continuation) and therefore fail when the textual input is purely benign. This work introduces the Benign-to-Toxic (B2T) paradigm: it induces harmful model outputs solely by optimizing adversarial images under entirely benign textual prompts, achieving, for the first time, implicit jailbreaking via "benign text + adversarial image → toxic response." The approach integrates gradient-based image optimization, multimodal joint safety evaluation, and a black-box transferability testing framework. Experiments show that B2T significantly outperforms prior methods in both white-box and black-box settings, and that it combines with text-based jailbreaks to further amplify attack efficacy. These results expose a previously underappreciated vulnerability in multimodal alignment: the susceptibility of the visual modality to adversarial manipulation even when linguistic inputs remain strictly harmless.
📝 Abstract
Optimization-based jailbreaks against large vision-language models (LVLMs) typically adopt the Toxic-Continuation setting, following the standard next-token prediction objective: an adversarial image is optimized to make the model predict the next token of a toxic prompt. However, we find that the Toxic-Continuation paradigm is effective at continuing already-toxic inputs but struggles to induce safety misalignment when explicit toxic signals are absent. We propose a new paradigm: the Benign-to-Toxic (B2T) jailbreak. Unlike prior work, we optimize adversarial images to induce toxic outputs from benign conditioning. Since the benign conditioning contains no safety violations, the image alone must break the model's safety mechanisms. Our method outperforms prior approaches, transfers in black-box settings, and complements text-based jailbreaks. These results reveal an underexplored vulnerability in multimodal alignment and open a fundamentally new direction for jailbreak research.
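The optimization described above can be sketched with a toy stand-in model: fix a benign text conditioning, then run projected gradient ascent on the image so that a chosen "toxic" target token becomes the most likely output, while keeping the perturbation inside an L∞ ball. Everything here (the linear logits model, the dimensions, the PGD hyperparameters, and the function names) is an illustrative assumption for exposition, not the paper's actual architecture, loss, or implementation:

```python
import numpy as np

# Toy stand-in for an LVLM head: a linear map from image pixels to token
# logits, plus a fixed bias representing the benign text conditioning.
rng = np.random.default_rng(0)
VOCAB, PIX = 8, 16                      # tiny vocabulary and "image" size
W_img = rng.normal(size=(VOCAB, PIX))   # image -> token logits
text_bias = rng.normal(size=VOCAB)      # contribution of the benign prompt

def logits(image):
    """Token logits given the adversarial image + fixed benign conditioning."""
    return W_img @ image + text_bias

def log_prob_target(image, target):
    """Log-probability of the target (toxic) token under a softmax."""
    z = logits(image)
    z = z - z.max()                      # stabilize the log-sum-exp
    return z[target] - np.log(np.exp(z).sum())

def grad_log_prob(image, target):
    """Analytic gradient of log p(target | image) w.r.t. the image."""
    z = logits(image)
    p = np.exp(z - z.max()); p /= p.sum()
    # d/d(image) of (z_target - logsumexp(z)) = W[target] - sum_k p_k W[k]
    return W_img[target] - p @ W_img

def b2t_style_attack(clean, target, eps=0.5, step=0.1, iters=200):
    """PGD: ascend log p(target) inside an L-infinity ball around `clean`."""
    adv = clean.copy()
    for _ in range(iters):
        adv = adv + step * np.sign(grad_log_prob(adv, target))
        adv = np.clip(adv, clean - eps, clean + eps)   # project to eps-ball
    return adv

clean = rng.normal(size=PIX)
target = 3
adv = b2t_style_attack(clean, target)
print("clean log p(target):", log_prob_target(clean, target))
print("adv   log p(target):", log_prob_target(adv, target))
```

The key difference the abstract emphasizes is what sits in `text_bias`: in the Toxic-Continuation setting the conditioning is itself toxic, whereas in B2T it is benign, so the entire optimization pressure toward the toxic target must be carried by the image perturbation.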