🤖 AI Summary
Large vision-language models (LVLMs) are susceptible to black-box adversarial visual-instruction (B-AVI) attacks, yet their robustness against such threats remains under-studied. Method: We introduce B-AVIBench, a standardized framework for analyzing LVLM robustness under B-AVIs, covering 23 attack types: four image-based B-AVIs, ten text-based B-AVIs, and nine content-bias B-AVIs (e.g., gender, violence, cultural, and racial biases). Using these attacks, we generate 316K B-AVIs spanning five categories of multimodal capabilities (ten tasks) and content bias, yielding a fine-grained, reproducible evaluation protocol. Contribution/Results: A comprehensive evaluation of 14 open-source LVLMs reveals notable weaknesses in robustness, security, and fairness; inherent biases appear even in advanced closed-source models such as GeminiProVision and GPT-4V. The benchmark and source code are publicly released.
📝 Abstract
Large Vision-Language Models (LVLMs) have shown significant progress in responding to visual instructions from users. However, these instructions, encompassing images and text, are susceptible to both intentional and inadvertent attacks. Despite the critical importance of LVLMs' robustness against such threats, current research in this area remains limited. To bridge this gap, we introduce B-AVIBench, a framework designed to analyze the robustness of LVLMs when facing various Black-box Adversarial Visual-Instructions (B-AVIs), including four types of image-based B-AVIs, ten types of text-based B-AVIs, and nine types of content-bias B-AVIs (such as gender, violence, cultural, and racial biases, among others). We generate 316K B-AVIs encompassing five categories of multimodal capabilities (ten tasks) and content bias. We then conduct a comprehensive evaluation of 14 open-source LVLMs to assess their performance. B-AVIBench also serves as a convenient tool for practitioners to evaluate the robustness of LVLMs against B-AVIs. Our findings and extensive experimental results shed light on the vulnerabilities of LVLMs and highlight that inherent biases exist even in advanced closed-source LVLMs like GeminiProVision and GPT-4V. This underscores the importance of enhancing the robustness, security, and fairness of LVLMs. The source code and benchmark are available at https://github.com/zhanghao5201/B-AVIBench.