Measuring Social Bias in Vision-Language Models with Face-Only Counterfactuals from Real Photos

📅 2026-01-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of evaluating demographic bias in vision-language models, where attributes such as race and gender are often entangled with contextual factors like background and clothing in real-world images, complicating causal attribution. To isolate the influence of demographic characteristics while holding all other visual elements constant, the authors propose a counterfactual evaluation paradigm that edits only facial attributes (e.g., race, gender) in authentic photographs. They introduce the FOCUS dataset and the REFLECT benchmark, enabling, for the first time, "face-only" counterfactual generation from real images and supporting multi-task bias assessment, including forced-choice decisions, socioeconomic inferences, and salary recommendations. Experiments across five state-of-the-art models reveal that demographic cues still induce significant output disparities even under strict visual control, with bias magnitude varying by task formulation, demonstrating both the necessity and the efficacy of this controlled counterfactual auditing approach.

📝 Abstract
Vision-Language Models (VLMs) are increasingly deployed in socially consequential settings, raising concerns about social bias driven by demographic cues. A central challenge in measuring such social bias is attribution under visual confounding: real-world images entangle race and gender with correlated factors such as background and clothing, obscuring attribution. We propose a face-only counterfactual evaluation paradigm that isolates demographic effects while preserving real-image realism. Starting from real photographs, we generate counterfactual variants by editing only facial attributes related to race and gender, keeping all other visual factors fixed. Based on this paradigm, we construct FOCUS, a dataset of 480 scene-matched counterfactual images across six occupations and ten demographic groups, and propose REFLECT, a benchmark comprising three decision-oriented tasks: two-alternative forced choice, multiple-choice socioeconomic inference, and numeric salary recommendation. Experiments on five state-of-the-art VLMs reveal that demographic disparities persist under strict visual control and vary substantially across task formulations. These findings underscore the necessity of controlled, counterfactual audits and highlight task design as a critical factor in evaluating social bias in multimodal models.
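The audit logic described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the VLM responses are mocked, and the record layout, group labels, and function name are all hypothetical. The key idea it demonstrates is that, because counterfactual images are scene-matched, any gap between per-group mean salary recommendations is attributable to the edited demographic cue alone.

```python
# Hedged sketch of a counterfactual salary-recommendation audit in the
# spirit of REFLECT. In practice each record would come from querying a
# real VLM on one counterfactual image; here the values are invented.
from collections import defaultdict
from statistics import mean

# Hypothetical records: (scene_id, demographic group, recommended salary).
# Variants of the same scene differ only in the edited facial attributes.
responses = [
    ("scene_01", "group_A", 62000), ("scene_01", "group_B", 58000),
    ("scene_02", "group_A", 71000), ("scene_02", "group_B", 66500),
    ("scene_03", "group_A", 55000), ("scene_03", "group_B", 54000),
]

def salary_gap(records):
    """Return per-group mean recommendations and the max pairwise gap.

    Scene-matching ensures all non-facial visual factors are held fixed,
    so the gap isolates the effect of the demographic edit.
    """
    by_group = defaultdict(list)
    for _, group, salary in records:
        by_group[group].append(salary)
    means = {g: mean(v) for g, v in by_group.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

means, gap = salary_gap(responses)
print(means)  # per-group mean recommended salary
print(gap)    # disparity attributable to the demographic edit
```

The same aggregation pattern extends to the benchmark's other two tasks by replacing the numeric salary with a selection rate (for forced choice) or an option distribution (for multiple-choice inference).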
Problem

Research questions and friction points this paper is trying to address.

social bias
vision-language models
attribution
visual confounding
demographic cues
Innovation

Methods, ideas, or system contributions that make the work stand out.

counterfactual evaluation
vision-language models
social bias
face editing
demographic attribution