🤖 AI Summary
To address frequent structural artifacts (e.g., distorted hands, faces, and arms) and low-fidelity samples in text-to-image generation, this paper proposes a self-guidance mechanism that requires no retraining of diffusion or flow models. The method dynamically identifies and suppresses anomalous generation paths by analyzing how the sampling probability density decays across noise levels, enabling plug-and-play quality enhancement. Its core innovation is the first training-free guidance paradigm relying solely on the model's own sampling probabilities, eliminating dependence on task-specific training data, auxiliary networks, or architectural priors. Compatible with both UNet- and Transformer-based architectures, it supports mainstream models including Stable Diffusion 3.5 and FLUX. Experiments demonstrate superior performance over existing guidance methods on FID and human preference scores, while significantly improving anatomical plausibility and effectively eliminating common structural artifacts.
📝 Abstract
Proper guidance strategies are essential to achieve high-quality generation results without retraining diffusion and flow-based text-to-image models. Existing guidance methods either require task-specific training or strong inductive biases about diffusion model networks, potentially limiting their applications. Motivated by the observation that artifact outliers can be detected by a significant decline in density from a noisier to a cleaner noise level, we propose Self-Guidance (SG), which improves image quality by suppressing the generation of low-quality samples. SG relies only on the sampling probabilities of its own diffusion model at different noise levels, with no need for any guidance-specific training. This makes it flexible enough to be used in a plug-and-play manner with other sampling algorithms, maximizing its potential to achieve competitive performance on many generative tasks. We conduct experiments on text-to-image and text-to-video generation with different architectures, including UNet and transformer models. With open-sourced diffusion models such as Stable Diffusion 3.5 and FLUX, Self-Guidance surpasses existing algorithms on multiple metrics, including both FID and Human Preference Score. Moreover, we find that SG has a surprisingly positive effect on the generation of physiologically correct human body structures such as hands, faces, and arms, showing its ability to eliminate human body artifacts with minimal effort. We will release our code along with this paper.
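To make the idea concrete, the sketch below illustrates the general shape of such a training-free guidance step. This is an assumed, simplified form for illustration only, not the paper's exact equation: the toy denoiser `eps` stands in for a real diffusion model's noise prediction, and the guided prediction extrapolates away from the same model's output at a noisier reference level `t_ref`, which is the kind of cross-noise-level comparison the abstract describes.

```python
import numpy as np

def eps(x, t):
    # Stand-in for a diffusion model's noise prediction at noise level t.
    # Hypothetical toy function for illustration only.
    return x * (1.0 + 0.1 * t)

def self_guided_eps(x, t, t_ref, w):
    # Generic training-free guidance sketch (assumed form): compare the
    # model's own predictions at the current level t and a noisier
    # reference level t_ref > t, then extrapolate away from the
    # reference. No auxiliary network or extra training is involved.
    e = eps(x, t)          # prediction at the cleaner noise level
    e_ref = eps(x, t_ref)  # prediction at the noisier reference level
    return e + w * (e - e_ref)

x = np.ones(4)
guided = self_guided_eps(x, t=0.2, t_ref=0.8, w=2.0)
```

The structure mirrors classifier-free guidance, except that the reference prediction comes from the same model at a different noise level rather than from an unconditional pass, which is what makes the scheme plug-and-play.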