🤖 AI Summary
This work systematically investigates the dynamic evolution of bias in open-source text-to-image (T2I) models. Motivated by the rapid proliferation of models on platforms like Hugging Face and the associated risk of bias propagation, we propose the first longitudinal (2022–2024), multi-task, three-dimensional quantitative evaluation framework, measuring distribution bias, generative hallucination, and generative miss-rate. Leveraging 107 mainstream open-source T2I models, we construct the reproducible Bias-T2I benchmark and an automated evaluation pipeline integrating prompt-controlled probing, statistical distribution analysis, and generation consistency verification. Our empirical analysis reveals that base-model bias decreases by 37% on average, whereas fine-tuned variants exhibit significantly exacerbated bias, particularly artistic and style-transfer models. These findings provide both a transparent, standardized assessment toolkit and empirically grounded insights to advance AI bias governance and responsible model development.
📝 Abstract
We investigate bias trends in text-to-image generative models over time, focusing on the increasing availability of models through open platforms like Hugging Face. While these platforms democratize AI, they also facilitate the spread of inherently biased models, often shaped by task-specific fine-tuning. Ensuring ethical and transparent AI deployment requires robust evaluation frameworks and quantifiable bias metrics. To this end, we assess bias across three key dimensions: (i) distribution bias, (ii) generative hallucination, and (iii) generative miss-rate. Analyzing over 100 models, we reveal how bias patterns evolve over time and across generative tasks. Our findings indicate that artistic and style-transferred models exhibit significant bias, whereas foundation models, benefiting from broader training distributions, are becoming progressively less biased. By identifying these systemic trends, we contribute a large-scale evaluation corpus to inform bias research and mitigation strategies, fostering more responsible AI development.

Keywords: Bias, Ethical AI, Text-to-Image, Generative Models, Open-Source Models
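The abstract names three metric dimensions without formal definitions. As a rough illustration only, one plausible reading of how such metrics could be computed over a batch of probed generations is sketched below; the function names, the KL-divergence choice for distribution bias, and the per-image `extra_objects`/`missing_objects` fields are all assumptions for this sketch, not definitions from the paper.

```python
from collections import Counter
import math

def distribution_bias(labels, reference):
    """KL divergence of the observed attribute distribution (e.g. perceived
    gender labels across generated images) from a reference distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sum(
        (counts[a] / total) * math.log((counts[a] / total) / p)
        for a, p in reference.items()
        if counts[a] > 0
    )

def hallucination_rate(results):
    """Fraction of images containing objects the prompt did not request."""
    return sum(1 for r in results if r["extra_objects"]) / len(results)

def miss_rate(results):
    """Fraction of images missing an object the prompt did request."""
    return sum(1 for r in results if r["missing_objects"]) / len(results)
```

Under this reading, a perfectly balanced attribute distribution yields a distribution bias of zero, and the two consistency metrics reduce to simple per-image failure rates, which matches the paper's framing of hallucination and miss-rate as generation-consistency checks.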