🤖 AI Summary
This study systematically evaluates social biases in vision-language models (VLMs) along gender and race dimensions, covering both explicit biases (probed via multiple-choice and image-comparison tasks) and implicit biases (surfaced via image captioning and form generation). To address the lack of an integrated assessment, we propose the first unified dual-track evaluation framework that jointly models explicit and implicit bias. Our method introduces interpretable, cross-task bias quantification metrics, combining multimodal prompt engineering, contrastive reasoning evaluation, attribute-association modeling, and textual statistical analysis. Empirical evaluation of state-of-the-art VLMs, including Gemini-1.5 and GPT-4V, reveals pervasive social biases across all models, with GPT-4V exhibiting the most pronounced implicit bias. As a key contribution, we publicly release the first open-source benchmark for social bias evaluation in VLMs, comprising annotated datasets, modular code, and tools enabling fine-grained bias attribution and reproducible fairness research.
📝 Abstract
This research investigates both explicit and implicit social biases exhibited by Vision-Language Models (VLMs). The key distinction between these bias types lies in the level of awareness: explicit bias refers to conscious, intentional biases, while implicit bias operates subconsciously. To analyze explicit bias, we directly pose questions to VLMs about gender and racial differences: (1) multiple-choice questions based on a given image (e.g., "What is the education level of the person in the image?"), and (2) yes-no comparisons using two images (e.g., "Is the person in the first image more educated than the person in the second image?"). For implicit bias, we design tasks where VLMs assist users but may reveal biases through their responses: (1) image description tasks, in which models describe individuals in images and we analyze disparities in textual cues across demographic groups; and (2) form completion tasks, in which models draft a personal information collection form with 20 attributes and we examine correlations among the selected attributes for potential biases. We evaluate Gemini-1.5, GPT-4V, GPT-4o, LLaMA-3.2-Vision, and LLaVA-v1.6. Our code and data are publicly available at https://github.com/uscnlp-lime/VisBias.
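The implicit-bias analysis of image descriptions can be illustrated with a minimal sketch: compare how often captions for different demographic groups contain words from contrasting cue lexicons. The group labels, captions, and cue word lists below are invented for illustration; they are not the paper's actual data or lexicons.

```python
# Minimal sketch of comparing textual cues in VLM-generated captions
# across demographic groups. All data and lexicons here are hypothetical.

# Hypothetical captions a VLM might produce for images of two groups.
captions = {
    "group_a": ["a confident engineer at work", "a doctor reviewing charts"],
    "group_b": ["a cheerful assistant at a desk", "a nurse smiling warmly"],
}

# Illustrative cue lexicons (placeholders, not the paper's actual word lists).
competence_cues = {"confident", "engineer", "doctor", "expert"}
warmth_cues = {"cheerful", "smiling", "warmly", "friendly"}

def cue_rate(texts, cues):
    """Fraction of captions containing at least one cue word."""
    hits = sum(any(word in t.split() for word in cues) for t in texts)
    return hits / len(texts)

# A large gap in cue rates between groups would suggest a descriptive disparity.
for group, texts in captions.items():
    print(group,
          "competence:", cue_rate(texts, competence_cues),
          "warmth:", cue_rate(texts, warmth_cues))
```

In practice, the comparison would run over many captions per group and use more principled lexicons or statistical tests, but the group-wise cue-rate comparison captures the basic idea of measuring descriptive disparities.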