Diffusion Model's Generalization Can Be Characterized by Inductive Biases toward a Data-Dependent Ridge Manifold

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how diffusion models generalize when they do not memorize training data, revealing the geometric structure and dynamic evolution of the distributions they generate. By introducing a data-dependent log-density ridge manifold, the authors characterize a three-stage behavior of generation trajectories (reach, align, and slide) and quantitatively analyze how normal and tangential motions, shaped by training error, govern when inter-mode generations emerge. Combining manifold analysis, random feature models, and diffusion dynamics, the study establishes an explicit link between the inductive bias of diffusion models and the geometry of ridge manifolds, showing how architectural bias and training accuracy jointly shape generative behavior. The predicted directional effects are validated on synthetic multimodal distributions and latent-space MNIST experiments, demonstrating applicability in both low- and high-dimensional settings.

📝 Abstract
When a diffusion model is not memorizing the training data set, how exactly does it generalize? A quantitative understanding of the distribution it generates would benefit, for example, assessments of the model's performance in downstream applications. We thus explicitly characterize what a diffusion model generates by proposing a log-density ridge manifold and quantifying how the generated data relate to this manifold as the inference dynamics progresses. More precisely, inference undergoes a reach-align-slide process centered on the ridge manifold: trajectories first reach a neighborhood of the manifold, then align as they are pushed toward or away from the manifold in normal directions, and finally slide along the manifold in tangent directions. Within this general behavior, different training errors lead to different normal and tangent motions, which can be quantified, and these detailed motions characterize when inter-mode generations emerge. A more detailed understanding of the training dynamics yields a more accurate quantification of the generation inductive bias; as an example, we consider a random feature model, for which we can explicitly illustrate how a diffusion model's inductive biases originate as a composition of architectural bias and training accuracy, and how they evolve with the inference dynamics. Experiments on synthetic multimodal distributions and MNIST latent diffusion support the predicted directional effects in both low and high dimensions.
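To make the ridge-manifold idea concrete, here is a small illustration (not the paper's code; the mixture, point choices, and function names are assumptions for this sketch). For a 2-D log-density, a point lies on the 1-D ridge when the gradient of log p has no component along the Hessian eigenvector with the smallest (most negative) eigenvalue, i.e. the normal direction that the reach-align-slide dynamics contracts along. For an equal-weight mixture of two unit Gaussians on the x-axis, the x-axis itself is the ridge:

```python
import numpy as np

# Hypothetical illustration of a log-density ridge condition for a
# two-component Gaussian mixture in 2-D (means at (-2, 0) and (2, 0),
# identity covariances, equal weights).
MEANS = np.array([[-2.0, 0.0], [2.0, 0.0]])

def log_p(x):
    # log of the equal-weight mixture (additive constants dropped)
    sq = np.sum((x - MEANS) ** 2, axis=-1)
    return np.logaddexp(-0.5 * sq[0], -0.5 * sq[1])

def grad_hess(x, h=1e-4):
    # central finite differences for the gradient and Hessian of log p
    d = len(x)
    g = np.zeros(d)
    H = np.zeros((d, d))
    I = np.eye(d)
    for i in range(d):
        g[i] = (log_p(x + h * I[i]) - log_p(x - h * I[i])) / (2 * h)
        for j in range(d):
            H[i, j] = (log_p(x + h * I[i] + h * I[j])
                       - log_p(x + h * I[i] - h * I[j])
                       - log_p(x - h * I[i] + h * I[j])
                       + log_p(x - h * I[i] - h * I[j])) / (4 * h * h)
    return g, H

def ridge_residual(x):
    # magnitude of the gradient's component along the normal direction:
    # ~0 on the ridge, nonzero off it
    g, H = grad_hess(np.asarray(x, dtype=float))
    eigvals, eigvecs = np.linalg.eigh(H)  # eigenvalues in ascending order
    v_normal = eigvecs[:, 0]              # smallest-eigenvalue direction
    return abs(v_normal @ g)

print(ridge_residual([1.0, 0.0]))  # on the ridge (the x-axis): ~0
print(ridge_residual([1.0, 0.5]))  # off the ridge: clearly nonzero
```

In the paper's terms, a generation trajectory first shrinks this normal residual (reach and align) and afterwards moves mainly in the direction orthogonal to `v_normal` (slide), so where inference terminates along the ridge is what the inductive-bias analysis quantifies.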
Problem

Research questions and friction points this paper is trying to address.

diffusion model
generalization
inductive bias
ridge manifold
generation dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion model
ridge manifold
inductive bias
inference dynamics
generalization