🤖 AI Summary
This work proposes an adaptive attack that exploits the sensitivity of existing vision-language models (VLMs) to prompt perturbations in order to circumvent their safety alignment mechanisms. The approach combines Chain-of-Thought (CoT) prompting with the ReAct reasoning framework: CoT is used to construct stealthy jailbreaking prompts, while ReAct-based feedback guides the iterative generation of semantically consistent adversarial image noise. Evaluated against post-training (safety-aligned) models, the method achieves a high attack success rate (ASR) while keeping outputs natural and inconspicuous in both the textual and visual modalities, and it substantially outperforms current jailbreaking strategies in effectiveness and stealth.
📝 Abstract
Vision-language models (VLMs) have become central to tasks such as visual question answering, image captioning, and text-to-image generation. However, their outputs are highly sensitive to prompt variations, which can reveal vulnerabilities in safety alignment. In this work, we present a jailbreak framework that exploits post-training Chain-of-Thought (CoT) prompting to construct stealthy prompts capable of bypassing safety filters. To further increase the attack success rate (ASR), we propose a ReAct-driven adaptive noising mechanism that iteratively perturbs input images based on model feedback. This mechanism leverages the ReAct paradigm to refine adversarial noise in the regions most likely to activate safety defenses, thereby enhancing stealth and evasion. Experimental results demonstrate that the proposed dual strategy significantly improves ASR while maintaining naturalness in both the textual and visual domains.
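The abstract describes the adaptive noising mechanism only at a high level, so the sketch below is merely an illustration of what a ReAct-style observe-reason-act loop over a bounded image perturbation could look like. The function names (`query_vlm`, `refusal_score`), hyperparameters (`epsilon`, `step_size`, `max_iters`), and the random-sign update are placeholders and assumptions, not the paper's actual interface or update rule; the CoT prompt-construction stage is omitted entirely.

```python
# Minimal sketch of a ReAct-style feedback loop over bounded image noise.
# All names and the update rule are hypothetical; the paper's method is not shown here.
import numpy as np


def query_vlm(image: np.ndarray, prompt: str) -> str:
    """Placeholder for the target VLM; a real attack would call the model here."""
    return "I'm sorry, I can't help with that."


def refusal_score(response: str) -> float:
    """Toy observation signal: 1.0 if the reply looks like a refusal, else 0.0."""
    refusal_markers = ("i'm sorry", "i cannot", "i can't")
    return float(any(m in response.lower() for m in refusal_markers))


def react_adaptive_noising(image, prompt, epsilon=8 / 255, step_size=2 / 255, max_iters=20):
    """Observe the model's feedback, reason about whether the safety filter still
    triggers, then act by adjusting an L-infinity-bounded perturbation and re-querying."""
    rng = np.random.default_rng(0)
    delta = np.zeros_like(image)
    response = ""
    for _ in range(max_iters):
        # Act: query the model with the currently perturbed image.
        response = query_vlm(np.clip(image + delta, 0.0, 1.0), prompt)
        # Observe: reduce the response to a scalar feedback signal.
        if refusal_score(response) == 0.0:
            break  # Reason: the defense no longer triggers, so stop refining.
        # Act again: nudge the noise (here a naive random-sign step, as a placeholder).
        delta += step_size * rng.choice([-1.0, 1.0], size=image.shape)
        delta = np.clip(delta, -epsilon, epsilon)  # keep the perturbation imperceptible
    return np.clip(image + delta, 0.0, 1.0), response


if __name__ == "__main__":
    dummy_image = np.zeros((224, 224, 3), dtype=np.float32)
    adv_image, last_reply = react_adaptive_noising(dummy_image, "Describe this image.")
    print(last_reply)
```

In this reading, the ReAct feedback signal decides whether to keep refining the noise, while the epsilon bound preserves visual naturalness; the paper's actual refinement presumably targets the image regions most likely to activate safety defenses rather than using a uniform random step.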