🤖 AI Summary
Image classification models often inherit societal biases—such as stereotypical associations between blond hair and femininity—due to imbalanced group distributions in training data.
Method: We propose a fairness-aware, diffusion-based data synthesis framework: (i) fine-tune Stable Diffusion with LoRA and DreamBooth for subgroup-specific generation; (ii) cluster images within each subgroup and train one DreamBooth model per cluster, so that no single model is overwhelmed by large intra-group variation; and (iii) adopt a two-stage training paradigm: pretrain on the synthetic, group-balanced, high-fidelity images, then fine-tune on real data.
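As an illustrative sketch of how step (iii)'s "group-balanced" synthesis budget could be computed, here is a minimal example (not from the paper; the equalize-every-(class, group)-cell heuristic and all names are assumptions):

```python
from collections import Counter

def synthesis_budget(labels):
    """Given (class, group) labels for the real training set, return how many
    synthetic images to generate per (class, group) cell so that every cell
    reaches the size of the largest one (an assumed balancing rule)."""
    counts = Counter(labels)
    target = max(counts.values())
    return {cell: target - n for cell, n in counts.items()}

# Toy example: blond hair is over-represented among females.
real = [("blond", "female")] * 80 + [("blond", "male")] * 20 \
     + [("dark", "female")] * 50 + [("dark", "male")] * 50
print(synthesis_budget(real))
# → {('blond', 'female'): 0, ('blond', 'male'): 60,
#    ('dark', 'female'): 30, ('dark', 'male'): 30}
```

The per-cluster DreamBooth models would then each be asked for their cluster's share of the cell's budget.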
Results: Our method outperforms vanilla Stable Diffusion across multiple benchmarks. It performs comparably to the state-of-the-art debiasing approach Group-DRO and surpasses it as dataset bias grows more severe, while also improving the representativeness of the synthetic data and its robustness against bias propagation.
📝 Abstract
Image classification systems often inherit biases from uneven group representation in training data. For example, in face datasets for hair-color classification, blond hair may be disproportionately associated with females, reinforcing stereotypes. A recent approach leverages Stable Diffusion to generate balanced training data, but such models often struggle to preserve the original data distribution. In this work, we explore multiple diffusion fine-tuning techniques, e.g., LoRA and DreamBooth, to generate images that more accurately represent each training group by learning directly from its samples. Additionally, to prevent a single DreamBooth model from being overwhelmed by excessive intra-group variation, we cluster the images within each group and train a separate DreamBooth model per cluster. These models are then used to generate group-balanced data for pretraining, followed by fine-tuning on real data. Experiments on multiple benchmarks demonstrate that the studied fine-tuning approaches outperform vanilla Stable Diffusion on average and achieve results comparable to SOTA debiasing techniques such as Group-DRO, while surpassing them as dataset bias severity increases.
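The intra-group clustering step above can be sketched as follows. This is a hedged, NumPy-only illustration, not the paper's implementation: the 8-dimensional embeddings, the cluster count `k=3`, and the plain k-means routine are all assumptions standing in for whatever feature extractor and clustering method the authors actually use.

```python
import numpy as np

def kmeans(feats, k, iters=50, seed=0):
    """Minimal k-means on image embeddings; returns a cluster id per image."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each embedding to its nearest center.
        dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Recompute centers from their members (skip empty clusters).
        for j in range(k):
            if (assign == j).any():
                centers[j] = feats[assign == j].mean(axis=0)
    return assign

# One DreamBooth fine-tuning job per (group, cluster), so no single model
# has to absorb all of a group's visual variation.
group_feats = {"blond_female": np.random.default_rng(1).normal(size=(40, 8))}
jobs = []
for group, feats in group_feats.items():
    assign = kmeans(feats, k=3)
    for j in range(3):
        jobs.append((group, j, np.flatnonzero(assign == j)))
print(len(jobs))  # 3: one fine-tuning job per cluster of the single group above
```

Each `(group, cluster, image_indices)` job would then fine-tune its own DreamBooth model on just those images.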