🤖 AI Summary
This work identifies and quantifies a phenomenon termed “biased generalization” in diffusion models, wherein generated samples increasingly resemble training data even as the test loss continues to decrease during late-stage training, compromising both privacy and generalization quality. By training the same network on two disjoint datasets and comparing distances between generated samples and training data, and by analyzing a controlled hierarchical data model with exact score functions, the study reveals a stage-wise feature-learning mechanism in deep diffusion models that underlies this bias. Empirical results demonstrate that the phenomenon is prevalent in real-world image generation tasks, challenging the conventional practice of selecting the stopping point based solely on minimal test loss. These findings offer a new perspective for designing training strategies and evaluating model performance in privacy-sensitive applications.
📝 Abstract
Generalization in generative modeling is defined as the ability to learn an underlying distribution from a finite dataset and produce novel samples, with evaluation largely driven by held-out performance and perceived sample quality. In practice, training is often stopped at the minimum of the test loss, taken as an operational indicator of generalization. We challenge this viewpoint by identifying a phase of biased generalization during training, in which the model continues to decrease the test loss while favoring samples with anomalously high proximity to training data. By training the same network on two disjoint datasets and comparing the mutual distances of generated samples and their similarity to training data, we introduce a quantitative measure of bias and demonstrate its presence on real images. We then study the mechanism of bias, using a controlled hierarchical data model where access to exact scores and ground-truth statistics allows us to precisely characterize its onset. We attribute this phenomenon to the sequential nature of feature learning in deep networks, where coarse structure is learned early in a data-independent manner, while finer features are resolved later in a way that increasingly depends on individual training samples. Our results show that early stopping at the test loss minimum, while optimal under standard generalization criteria, may be insufficient for privacy-critical applications.
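The abstract's bias measure rests on comparing how close a model's generated samples sit to its own training set versus a disjoint set drawn from the same distribution. The paper's exact metric is not reproduced here; the following is a minimal sketch of the idea using mean nearest-neighbor Euclidean distance, where `bias_score` is a hypothetical name. A ratio well below 1 signals anomalous proximity to the model's own training data.

```python
import numpy as np

def nearest_neighbor_distances(samples, reference):
    """Euclidean distance from each sample to its nearest reference point."""
    # (n, d) vs (m, d) -> (n, m) pairwise distance matrix, reduced over references
    diffs = samples[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return dists.min(axis=1)

def bias_score(generated, train_own, train_other):
    """Ratio of mean proximity to the model's own training set vs a disjoint one.

    Under an unbiased model, samples should be statistically no closer to
    train_own than to train_other (score near 1); a score well below 1
    indicates memorization-like bias toward the training data actually seen.
    """
    d_own = nearest_neighbor_distances(generated, train_own).mean()
    d_other = nearest_neighbor_distances(generated, train_other).mean()
    return d_own / d_other
```

In the two-model setup described above, one would compute this score for each network against the other's training split; the onset of biased generalization shows up as the score drifting below 1 during late training while test loss still falls.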
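The controlled study relies on having the exact score function of the data distribution in closed form. The paper's hierarchical data model is not specified here; as an illustration of what "access to exact scores" means, a flat isotropic Gaussian mixture is the simplest case where the score is analytic: the score is a responsibility-weighted average of the per-component scores.

```python
import numpy as np

def gmm_score(x, means, sigma):
    """Exact score grad_x log p(x) for an equal-weight isotropic Gaussian mixture.

    p(x) = (1/K) * sum_k N(x; mu_k, sigma^2 I), so the score is
    sum_k resp_k(x) * (mu_k - x) / sigma^2 with softmax responsibilities.
    """
    # Squared distance of x to each component mean, shape (K,)
    d2 = ((x[None, :] - means) ** 2).sum(axis=1)
    # Responsibilities via a stabilized softmax over component log-densities
    logits = -d2 / (2.0 * sigma**2)
    logits -= logits.max()
    resp = np.exp(logits)
    resp /= resp.sum()
    return (resp[:, None] * (means - x[None, :])).sum(axis=0) / sigma**2
```

With such a ground-truth score available, a trained network's learned score can be compared against it at every noise level, which is what makes the onset of bias precisely characterizable in the controlled setting.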