🤖 AI Summary
Open-source large vision-language models (LVLMs) exhibit significant safety vulnerabilities under adversarial prompting, frequently generating toxic or insulting outputs. Method: We propose the first multimodal red-teaming framework grounded in social psychology theory—distinct from conventional text-only red-teaming—designed to simulate realistic social manipulation tactics, with particular focus on the amplifying effects of dark humor and multimodal toxicity completion. Contribution/Results: Evaluating leading LVLMs—including LLaVA, InstructBLIP, Fuyu, and Qwen-VL-Chat—we observe toxicity and insult response rates as high as 21.50% and 13.40%, respectively; existing fine-tuned safety mitigations fail markedly under adversarial prompts. Our work establishes a reproducible benchmark suite and a theory-informed attack paradigm to advance robust safety evaluation and defense for LVLMs.
📝 Abstract
The rapid advancement of Large Vision-Language Models (LVLMs) has unlocked capabilities with potential applications ranging from content creation to productivity enhancement. Despite this potential, LVLMs exhibit vulnerabilities, especially in generating toxic or unsafe responses. Malicious actors can exploit these vulnerabilities to propagate toxic content in an automated (or semi-automated) manner, leveraging the susceptibility of LVLMs to deception via strategically crafted prompts, without fine-tuning or compute-intensive procedures. Despite existing red-teaming efforts and the inherent risks associated with LVLMs, the exploration of LVLM vulnerabilities remains nascent and has yet to be addressed systematically. This study systematically examines the vulnerabilities of open-source LVLMs, including LLaVA, InstructBLIP, Fuyu, and Qwen, using adversarial prompt strategies that simulate real-world social manipulation tactics informed by social theories. Our findings show that (i) toxicity and insulting are the most prevalent behaviors, with mean rates of 16.13% and 9.75%, respectively; (ii) Qwen-VL-Chat, LLaVA-v1.6-Vicuna-7b, and InstructBLIP-Vicuna-7b are the most vulnerable models, exhibiting toxic response rates of 21.50%, 18.30%, and 17.90%, and insult response rates of 13.40%, 11.70%, and 10.10%, respectively; and (iii) prompting strategies incorporating dark humor and multimodal toxic prompt completion significantly elevate these vulnerabilities. Despite being fine-tuned for safety, these models still generate content with varying degrees of toxicity under adversarial inputs, highlighting the urgent need for stronger safety mechanisms and robust guardrails in LVLM development.
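The reported percentages are per-model response rates: the fraction of adversarial prompts for which a model's response is flagged with a given behavior (e.g., toxic or insulting). A minimal sketch of that tally, assuming a hypothetical `results` list of (model, behavior-labels) pairs produced by some upstream toxicity classifier (not the paper's actual pipeline):

```python
# Sketch: per-model behavior response rates from labeled red-teaming runs.
# rate(model, label) = (# responses flagged with label) / (# prompts sent to model)
# The data and label names below are illustrative, not from the paper.
from collections import defaultdict

def response_rates(results):
    """results: iterable of (model_name, labels), where labels is the set of
    behavior flags a classifier assigned to one model response."""
    totals = defaultdict(int)                       # prompts per model
    flagged = defaultdict(lambda: defaultdict(int))  # model -> label -> count
    for model, labels in results:
        totals[model] += 1
        for label in labels:
            flagged[model][label] += 1
    return {
        model: {label: count / totals[model]
                for label, count in flagged[model].items()}
        for model in totals
    }

# Hypothetical toy run: three prompts to one model, two to another.
toy = [
    ("Qwen-VL-Chat", {"toxic", "insult"}),
    ("Qwen-VL-Chat", {"toxic"}),
    ("Qwen-VL-Chat", set()),
    ("LLaVA-v1.6-Vicuna-7b", {"toxic"}),
    ("LLaVA-v1.6-Vicuna-7b", set()),
]
rates = response_rates(toy)
# e.g. rates["Qwen-VL-Chat"]["toxic"] == 2/3
```

In practice each response would be scored by an external toxicity classifier rather than hand-labeled, but the rate computation itself is this simple counting step.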