How I Met Your Bias: Investigating Bias Amplification in Diffusion Models

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models often amplify social biases—such as gender and skin-tone biases—present in training data during image synthesis, yet it remains unclear whether the sampling process itself modulates this bias amplification. This work presents the first systematic causal investigation into how sampling algorithms (e.g., DDIM, Euler) and hyperparameters (e.g., step count, eta) influence bias propagation. Leveraging controlled experiments on Biased MNIST, Multi-Color MNIST, BFFHQ, and Stable Diffusion—combined with quantitative bias metrics—we demonstrate that sampling strategies can actively mitigate or exacerbate bias, inducing bias magnitude shifts exceeding 40%. Crucially, these effects persist even when the underlying diffusion model is held fixed, revealing sampling as a controllable, post-training lever for bias regulation. Our findings challenge the prevailing assumption that bias stems solely from data or model architecture, and establish sampling as a novel, actionable intervention point for fair generative modeling.
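The eta hyperparameter mentioned above controls how much stochastic noise the DDIM sampler injects at each denoising step. As a minimal scalar sketch (not the authors' code; variable names are illustrative), the standard DDIM update shows where eta enters: eta = 0 gives deterministic sampling, while eta = 1 recovers DDPM-like stochasticity.

```python
import math

def ddim_sigma(alpha_bar_t, alpha_bar_prev, eta):
    """Per-step noise scale in DDIM: eta=0 -> deterministic, eta=1 -> DDPM-like."""
    return (eta
            * math.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar_t))
            * math.sqrt(1 - alpha_bar_t / alpha_bar_prev))

def ddim_step(x_t, eps_pred, alpha_bar_t, alpha_bar_prev, eta, noise):
    """One scalar DDIM denoising step given a predicted noise value eps_pred."""
    # Predicted clean sample x0 from the current noisy sample.
    x0 = (x_t - math.sqrt(1 - alpha_bar_t) * eps_pred) / math.sqrt(alpha_bar_t)
    sigma = ddim_sigma(alpha_bar_t, alpha_bar_prev, eta)
    # Direction pointing back toward x_t, shrunk to leave room for sigma noise.
    dir_xt = math.sqrt(1 - alpha_bar_prev - sigma**2) * eps_pred
    return math.sqrt(alpha_bar_prev) * x0 + dir_xt + sigma * noise
```

Because eta rescales only the injected-noise term, sweeping it (as the paper does for bias measurements) changes the sampling trajectory without touching the trained model's weights.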

📝 Abstract
Diffusion-based generative models demonstrate state-of-the-art performance across various image synthesis tasks, yet their tendency to replicate and amplify dataset biases remains poorly understood. Although previous research has viewed bias amplification as an inherent characteristic of diffusion models, this work provides the first analysis of how sampling algorithms and their hyperparameters influence bias amplification. We empirically demonstrate that samplers for diffusion models -- commonly optimized for sample quality and speed -- have a significant and measurable effect on bias amplification. Through controlled studies with models trained on Biased MNIST, Multi-Color MNIST and BFFHQ, and with Stable Diffusion, we show that sampling hyperparameters can induce both bias reduction and amplification, even when the trained model is fixed. Source code is available at https://github.com/How-I-met-your-bias/how_i_met_your_bias.
Problem

Research questions and friction points this paper is trying to address.

Investigates bias amplification in diffusion models
Analyzes sampling algorithms' impact on bias
Demonstrates hyperparameters can reduce or amplify bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes sampling algorithms' impact on bias amplification
Shows hyperparameters can reduce or amplify dataset biases
Empirical studies with controlled datasets and Stable Diffusion
Nathan Roos
LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Ekaterina Iakovleva
LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Ani Gjergji
MaLGa-DIBRIS, University of Genova, Italy
Vito Paolo Pastore
MaLGa-DIBRIS, University of Genova, Italy
machine learning, computer vision, deep learning, image cell analysis, functional connectivity
Enzo Tartaglione
Associate Professor, Télécom Paris, Institut Polytechnique de Paris
deep learning, compression, pruning, debiasing, frugal AI