When Cars Have Stereotypes: Auditing Demographic Bias in Objects from Text-to-Image Models

📅 2025-08-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study is the first to systematically uncover implicit demographic biases in text-to-image generative models of a subtle kind: spurious associations between non-human objects (e.g., automobiles) and human demographic attributes (e.g., gender, race), in which object visual properties (e.g., color, style) are inappropriately conditioned on demographic prompt tokens. We propose SODA, a bias-auditing framework that employs controlled-variable prompting and cross-model visual attribute comparison across GPT-Image-1, Imagen 4, and Stable Diffusion to assess five object categories. Analyzing 2,700 generated images, we quantify statistically significant and consistent stereotypical associations; notably, limited generative diversity in certain models exacerbates these biases. SODA is scalable and introduces the first standardized diagnostic tool for evaluating fairness with respect to non-human objects in generative AI. It advances methodological rigor through systematic, model-agnostic bias measurement, and practical impact by highlighting critical fairness risks beyond anthropomorphic subjects.
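To make the controlled-variable prompting concrete, here is a minimal sketch of how the prompt grid might be built, pairing each object category with a neutral baseline and demographic-cued variants. This is not the authors' code: the object list, cue wordings, and prompt template below are illustrative assumptions.

```python
from itertools import product

# Illustrative lists; the paper's exact five object categories and
# demographic cue wordings are assumptions here.
OBJECTS = ["a car", "a house", "a bicycle", "a chair", "a backpack"]
CUES = [None, "for young people", "for elderly people", "for men", "for women"]

def build_prompt_grid(objects=OBJECTS, cues=CUES):
    """Pair every object with every cue; cue=None yields the neutral baseline."""
    grid = []
    for obj, cue in product(objects, cues):
        prompt = f"A photo of {obj}" if cue is None else f"A photo of {obj} {cue}"
        grid.append({"object": obj, "cue": cue or "neutral", "prompt": prompt})
    return grid

for row in build_prompt_grid()[:3]:
    print(row)
```

Holding everything but the demographic cue fixed is what lets any visual difference between cued and neutral outputs be attributed to the cue itself.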

📝 Abstract
While prior research on text-to-image generation has predominantly focused on biases in human depictions, we investigate a more subtle yet pervasive phenomenon: demographic bias in generated objects (e.g., cars). We introduce SODA (Stereotyped Object Diagnostic Audit), a novel framework for systematically measuring such biases. Our approach compares visual attributes of objects generated with demographic cues (e.g., "for young people") to those from neutral prompts, across 2,700 images produced by three state-of-the-art models (GPT Image-1, Imagen 4, and Stable Diffusion) in five object categories. Through a comprehensive analysis, we uncover strong associations between specific demographic groups and visual attributes, such as recurring color patterns prompted by gender or ethnicity cues. These patterns reflect and reinforce not only well-known stereotypes but also more subtle and unintuitive biases. We also observe that some models generate less diverse outputs, which in turn amplifies the visual disparities compared to neutral prompts. Our proposed auditing framework offers a practical approach for testing, revealing how stereotypes still remain embedded in today's generative models. We see this as an essential step toward more systematic and responsible AI development.
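One way to operationalize the color-pattern comparison described in the abstract: extract a dominant color per image, discretize it into coarse hue bins, and test whether the cued distribution differs from the neutral baseline. A minimal sketch follows; the k-means color extraction, six-bin hue discretization, and chi-square test are plausible assumptions, not the paper's exact protocol.

```python
import colorsys

import numpy as np
from PIL import Image
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

def dominant_color(path, k=3):
    """Centroid (RGB) of the largest k-means color cluster in a downscaled image."""
    img = np.asarray(Image.open(path).convert("RGB").resize((64, 64)))
    pixels = img.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10).fit(pixels)
    return km.cluster_centers_[np.bincount(km.labels_).argmax()]

def hue_bin(rgb, n_bins=6):
    """Discretize an RGB color into one of n_bins coarse hue bins."""
    h, _, _ = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return int(h * n_bins) % n_bins

def color_shift_test(neutral_paths, cued_paths, n_bins=6):
    """Chi-square test of whether cued images' dominant hues differ from neutral."""
    table = np.zeros((2, n_bins))
    for row, paths in enumerate((neutral_paths, cued_paths)):
        for p in paths:
            table[row, hue_bin(dominant_color(p), n_bins)] += 1
    chi2, pval, _, _ = chi2_contingency(table + 1)  # +1 smoothing avoids zero cells
    return chi2, pval
```

`color_shift_test` yields one p-value per (object, cue) cell, so significance should be judged after correcting for the number of cells tested (e.g., Bonferroni).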
Problem

Research questions and friction points this paper is trying to address.

Investigates demographic bias in generated non-human objects such as cars
Introduces the SODA framework to measure object-related stereotypes
Analyzes visual disparities across 2,700 images from three state-of-the-art models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the SODA (Stereotyped Object Diagnostic Audit) framework for bias measurement
Compares visual attributes of objects generated with demographic cues against neutral baselines
Audits stereotypes across multiple text-to-image models (see the audit-loop sketch below)
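Putting the two sketches together, a SODA-style audit loop over one model might look like the following. `generate_image` is a hypothetical placeholder for whichever text-to-image API is being audited (the paper covers GPT-Image-1, Imagen 4, and Stable Diffusion); the sample count per prompt is an illustrative choice, and `build_prompt_grid` / `color_shift_test` come from the sketches above.

```python
import os

N_SAMPLES = 30  # images per (object, cue) cell; illustrative, not the paper's setting

def generate_image(model_name, prompt, out_path):
    """Hypothetical placeholder: call the audited model's API and save a PNG."""
    raise NotImplementedError("wire up the actual text-to-image API here")

def run_audit(model_name, out_dir="soda_images"):
    results = []
    neutral = {}  # object -> neutral baseline image paths
    # build_prompt_grid() lists the neutral cue first for each object,
    # so the baseline is always available before its cued counterparts.
    for row in build_prompt_grid():
        paths = []
        for i in range(N_SAMPLES):
            name = f"{row['object']}_{row['cue']}_{i}.png".replace(" ", "_")
            path = os.path.join(out_dir, model_name, name)
            os.makedirs(os.path.dirname(path), exist_ok=True)
            generate_image(model_name, row["prompt"], path)
            paths.append(path)
        if row["cue"] == "neutral":
            neutral[row["object"]] = paths
        else:
            chi2, pval = color_shift_test(neutral[row["object"]], paths)
            results.append((row["object"], row["cue"], chi2, pval))
    return results
```

Running the same loop across several models, as the paper does, would also surface the diversity effect noted in the abstract: a model that collapses to a few modes per prompt will show larger cued-versus-neutral disparities.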