Exploring Bias in over 100 Text-to-Image Generative Models

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically investigates the dynamic evolution of bias in open-source text-to-image (T2I) models. Motivated by the rapid proliferation of models on platforms like Hugging Face and the associated risk of bias propagation, we propose the first longitudinal (2022–2024), multi-task, three-dimensional quantitative evaluation framework—measuring distributional bias, generation hallucination, and omission rate. Leveraging 107 mainstream open-source T2I models, we construct the reproducible Bias-T2I benchmark and an automated evaluation pipeline integrating prompt-controlled probing, statistical distribution analysis, and generation consistency verification. Our empirical analysis reveals that base model bias decreases on average by 37%, whereas fine-tuned variants exhibit significantly exacerbated bias—particularly in artistic and style-transfer models. These findings provide both a transparent, standardized assessment toolkit and empirically grounded insights to advance AI bias governance and responsible model development.
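The three evaluation dimensions described above can be illustrated with a minimal sketch. The exact metric definitions below are assumptions for illustration, not the paper's formulas: distribution bias is modeled as the KL divergence of the observed attribute distribution from uniform, while hallucination and omission (miss) rates are simple set-overlap fractions between prompted and detected objects.

```python
from collections import Counter
import math


def distribution_bias(labels):
    """KL divergence of observed attribute counts from a uniform distribution.

    0.0 means the generated attributes (e.g. perceived gender) are perfectly
    balanced across the observed categories; larger values mean more skew.
    Illustrative definition only; the paper's exact metric may differ.
    """
    counts = Counter(labels)
    n = len(labels)
    uniform = 1.0 / len(counts)
    return sum((c / n) * math.log((c / n) / uniform) for c in counts.values())


def generative_miss_rate(prompted, detected):
    """Fraction of prompted objects absent from the image's detections."""
    missing = [obj for obj in prompted if obj not in detected]
    return len(missing) / len(prompted)


def hallucination_rate(prompted, detected):
    """Fraction of detected objects that were never prompted."""
    extra = [obj for obj in detected if obj not in prompted]
    return len(extra) / max(len(detected), 1)
```

For example, a model that renders "a doctor" as male in 8 of 10 images yields `distribution_bias(["male"] * 8 + ["female"] * 2) ≈ 0.19`, whereas a balanced 5/5 split yields exactly 0.0.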

📝 Abstract
We investigate bias trends in text-to-image generative models over time, focusing on the increasing availability of models through open platforms like Hugging Face. While these platforms democratize AI, they also facilitate the spread of inherently biased models, often shaped by task-specific fine-tuning. Ensuring ethical and transparent AI deployment requires robust evaluation frameworks and quantifiable bias metrics. To this end, we assess bias across three key dimensions: (i) distribution bias, (ii) generative hallucination, and (iii) generative miss-rate. Analyzing over 100 models, we reveal how bias patterns evolve over time and across generative tasks. Our findings indicate that artistic and style-transferred models exhibit significant bias, whereas foundation models, benefiting from broader training distributions, are becoming progressively less biased. By identifying these systemic trends, we contribute a large-scale evaluation corpus to inform bias research and mitigation strategies, fostering more responsible AI development. Keywords: Bias, Ethical AI, Text-to-Image, Generative Models, Open-Source Models
Problem

Research questions and friction points this paper is trying to address.

Investigates bias trends in text-to-image generative models.
Assesses bias across distribution, hallucination, and miss-rate dimensions.
Identifies systemic bias patterns to inform ethical AI development.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates bias in text-to-image models
Uses three key bias dimensions
Analyzes over 100 generative models
J. Vice — University of Western Australia
Naveed Akhtar — University of Melbourne
Richard Hartley — Australian National University, National ICT Australia (NICTA)
Ajmal Mian — University of Western Australia