Social Perception of Faces in a Vision-Language Model

📅 2024-08-26
🏛️ Conference on Fairness, Accountability and Transparency
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically investigates social-perception bias in the open-source vision-language model CLIP, focusing on the legally protected attributes of age, gender, and race. The authors introduce a controlled experimental paradigm grounded in social psychology: synthetic face images, independently varied along six dimensions, are paired with psychometrically validated textual prompts and compared via multimodal embedding similarity, moving beyond observational studies of uncurated "in-the-wild" data. Key findings: (1) CLIP exhibits a strong pattern of social-perception bias against the faces of Black women, producing extreme perception values across ages and facial expressions; (2) although CLIP can make fine-grained, human-like social judgments, its biases with respect to protected attributes are systematic; and (3) facial expression influences social perception more than age, and lighting about as much as age, so studies that do not control for such visual confounders risk misattributing bias.

📝 Abstract
We explore social perception of human faces in CLIP, a widely used open-source vision-language model. To this end, we compare the similarity in CLIP embeddings between different textual prompts and a set of face images. Our textual prompts are constructed from well-validated social psychology terms denoting social perception. The face images are synthetic and are systematically and independently varied along six dimensions: the legally protected attributes of age, gender, and race, as well as facial expression, lighting, and pose. Independently and systematically manipulating face attributes allows us to study the effect of each on social perception and avoids confounds that can occur in wild-collected data due to uncontrolled systematic correlations between attributes. Thus, our findings are experimental rather than observational. Our main findings are three. First, while CLIP is trained on the widest variety of images and texts, it is able to make fine-grained human-like social judgments on face images. Second, age, gender, and race do systematically impact CLIP’s social perception of faces, suggesting an undesirable bias in CLIP vis-a-vis legally protected attributes. Most strikingly, we find a strong pattern of bias concerning the faces of Black women, where CLIP produces extreme values of social perception across different ages and facial expressions. Third, facial expression impacts social perception more than age and lighting as much as age. The last finding predicts that studies that do not control for unprotected visual attributes may reach the wrong conclusions on bias. Our novel method of investigation, which is founded on the social psychology literature and on the experiments involving the manipulation of individual attributes, yields sharper and more reliable observations than previous observational methods and may be applied to study biases in any vision-language model.
Problem

Research questions and friction points the paper addresses.

Investigating social perception bias in CLIP face embeddings
Analyzing effects of protected attributes on AI social judgments
Developing experimental method to isolate facial attribute impacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using synthetic face images with systematically varied attributes
Applying CLIP embeddings to compare text prompts and faces
Employing controlled attribute manipulation for experimental bias analysis
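The paper's core measurement, comparing CLIP embeddings of social-perception prompts against face-image embeddings, amounts to a cosine-similarity analysis in a shared embedding space. The sketch below illustrates that comparison with toy vectors standing in for real CLIP outputs; the embedding values and the example trait prompts are placeholders, not data from the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between two sets of row-vector embeddings."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Toy 3-D stand-ins: in the actual study, each synthetic face and each
# validated social-psychology prompt would be encoded by CLIP's image
# and text towers, respectively.
image_emb = np.array([[1.0, 0.0, 0.1],   # face image A
                      [0.0, 1.0, 0.1]])  # face image B
prompt_emb = np.array([[0.9, 0.1, 0.0],   # e.g. "a photo of a trustworthy person"
                       [0.1, 0.9, 0.0]])  # e.g. "a photo of a dominant person"

sim = cosine_similarity(image_emb, prompt_emb)

# The per-image similarity profile over prompts is the model's implied
# social judgment; the bias analysis tracks how these scores shift as a
# single face attribute (age, race, expression, lighting, ...) is varied.
best = sim.argmax(axis=1)
print(best)  # image A aligns with prompt 0, image B with prompt 1
```

Because each face attribute is manipulated independently, differences in these similarity scores can be attributed to a single attribute rather than to correlations baked into wild-collected data.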