🤖 AI Summary
Prior research on vision-language model (VLM) bias has focused narrowly on gender–occupation associations, overlooking multidimensional, context-sensitive social stereotypes and their effects on factual accuracy, perceptual interpretation, stereotyping, and downstream decision-making. Method: The authors introduce VIGNETTE, a large-scale VQA benchmark comprising over 30 million images and grounded in social psychological theory, enabling interpretable evaluation of how VLMs encode social hierarchies and pre-assign capabilities based on visual identity cues. The approach combines social cognitive modeling, bias-sensitive prompt design, and significance testing. Contribution/Results: Experiments uncover counterintuitive stereotypic patterns, including cross-identity trait attribution biases and implicit role assignment tendencies, that narrower gender–occupation paradigms miss. VIGNETTE establishes a four-dimensional bias assessment framework, spanning factual, perceptual, stereotypic, and decisional dimensions, for evaluating trustworthy multimodal AI.
📝 Abstract
While bias in large language models (LLMs) is well-studied, similar concerns in vision-language models (VLMs) have received comparatively less attention. Existing VLM bias studies often focus on portrait-style images and gender-occupation associations, overlooking broader and more complex social stereotypes and their implied harms. This work introduces VIGNETTE, a large-scale VQA benchmark with 30M+ images for evaluating bias in VLMs through a question-answering framework spanning four directions: factuality, perception, stereotyping, and decision making. Going beyond narrowly centered studies, we assess how VLMs interpret identities in contextualized settings, revealing how models make trait and capability assumptions and exhibit patterns of discrimination. Drawing from social psychology, we examine how VLMs connect visual identity cues to trait- and role-based inferences, encoding social hierarchies through biased selections. Our findings uncover subtle, multifaceted, and surprising stereotypical patterns, offering insights into how VLMs construct social meaning from visual inputs.
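To make the question-answering evaluation concrete, here is a minimal sketch of how a stereotype-direction probe might be scored: the model answers the same question about images from different identity groups, and the gap in answer rates serves as a crude bias signal. Everything here is hypothetical, not the paper's implementation; `ask_vlm` is a stub standing in for a real VLM inference call, and the image IDs and canned answers are illustrative only.

```python
from collections import Counter

def ask_vlm(image_id: str, question: str) -> str:
    """Stub standing in for a real VLM call (hypothetical).

    A real evaluation would run model inference on the image here."""
    canned = {"img_a": "leader", "img_b": "leader", "img_c": "assistant"}
    return canned.get(image_id, "unknown")

def answer_rate(images: list[str], question: str, target_answer: str) -> float:
    """Fraction of images whose answer matches target_answer."""
    answers = [ask_vlm(img, question) for img in images]
    return answers.count(target_answer) / len(answers)

def disparity(group_to_images: dict[str, list[str]],
              question: str, target_answer: str):
    """Max pairwise gap in target-answer rates across identity groups.

    A large gap means the model attributes the trait or role in the
    question to one group far more often than another."""
    rates = {group: answer_rate(imgs, question, target_answer)
             for group, imgs in group_to_images.items()}
    return max(rates.values()) - min(rates.values()), rates
```

Usage: calling `disparity({"group_x": ["img_a", "img_b"], "group_y": ["img_c"]}, "Who is the leader?", "leader")` with the stub above yields a gap of 1.0, since every group-x image but no group-y image receives the "leader" attribution.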