🤖 AI Summary
Contemporary text-to-image models (e.g., Stable Diffusion) exhibit significant demographic biases, while mainstream debiasing approaches rely on costly fine-tuning that often degrades generation quality. This work introduces a zero-training, noise-space-driven debiasing paradigm. We first identify structurally coherent “underrepresented-group regions” in the initial diffusion noise space; then, we design a “weak guidance” mechanism that steers sampling trajectories toward these regions without compromising semantic fidelity. Our method is grounded in rigorous noise-space analysis, diffusion-path visualization, and cross-model validation. Evaluated across multiple bias benchmarks, it achieves an average 38% reduction in gender and racial bias, with negligible impact on generation quality (ΔFID < 0.5). Crucially, it incurs no training overhead—requiring only inference-time adjustments.
📝 Abstract
Recent advancements in text-to-image models, such as Stable Diffusion, show significant demographic biases. Existing de-biasing techniques rely heavily on additional training, which imposes high computational costs and risks compromising core image generation functionality. This hinders their adoption in real-world applications. In this paper, we explore Stable Diffusion's overlooked potential to reduce bias without requiring additional training. Through our analysis, we uncover that initial noises associated with minority attributes form "minority regions" rather than being scattered. We view these "minority regions" as opportunities in SD to reduce bias. To unlock this potential, we propose a novel de-biasing method called 'weak guidance,' carefully designed to guide a random noise sample toward the minority regions without compromising semantic integrity. Through analysis and experiments on various versions of SD, we demonstrate that our proposed approach effectively reduces bias without additional training, achieving both efficiency and preservation of core image generation functionality.
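The abstract describes nudging a random initial noise toward a "minority region" of the diffusion noise space while preserving its statistical character. The exact guidance rule is not given above, so the following is only a minimal illustrative sketch: it interpolates a random noise sample toward a hypothetical minority-region anchor (`z_minority` and `lam` are invented names, not the paper's) and rescales the result back to the typical norm of a standard Gaussian sample, so it remains plausible as an initial diffusion noise.

```python
import numpy as np

def weak_guidance(z, z_minority, lam=0.1):
    """Nudge an initial noise sample slightly toward a minority-region anchor.

    Illustrative sketch only; the paper's actual guidance mechanism is
    not specified in this abstract. `z_minority` (an anchor noise taken
    from a minority region) and `lam` (guidance strength) are
    hypothetical names introduced for this example.
    """
    mixed = (1.0 - lam) * z + lam * z_minority
    # Rescale to ~sqrt(d), the typical norm of a d-dimensional standard
    # Gaussian sample, so the guided noise stays on the shell where
    # diffusion models expect their initial noise to lie.
    target_norm = np.sqrt(z.size)
    return mixed * (target_norm / np.linalg.norm(mixed))

rng = np.random.default_rng(0)
d = 4 * 64 * 64                       # e.g. SD's 4x64x64 latent, flattened
z = rng.standard_normal(d)            # ordinary random initial noise
z_minority = rng.standard_normal(d)   # stand-in for a minority-region anchor

z_guided = weak_guidance(z, z_minority, lam=0.1)
```

A small `lam` keeps the guided noise close to the original sample, which is one way to read the claim that guidance is "weak" enough to preserve semantic integrity while still biasing the sampling trajectory toward the target region.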