Entropy-Based Dimension-Free Convergence and Loss-Adaptive Schedules for Diffusion Models

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of obtaining dimension-free convergence guarantees for diffusion models, which existing analyses typically fail to provide because they rely on assumptions about data dimensionality or the geometry of the target distribution. Taking an information-theoretic approach, the authors establish a dimension-independent upper bound on the KL divergence between the generated and target distributions in terms of Shannon entropy, yielding the first convergence analysis that dispenses with geometric assumptions. They further propose the Loss-Adaptive Schedule (LAS), an adaptive time-step scheduling strategy that depends solely on the training loss and incurs no additional computational overhead. Theoretically, they prove that the KL divergence is bounded by O(H²/K), where H is the Shannon entropy of the target and K is the number of sampling steps; empirically, LAS achieves better sampling quality than prevailing heuristic schedules.
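
In symbols, the claimed rate has the following shape (a hedged restatement: the constant $C$ and the exact endpoint factors are assumptions here, since the summary and abstract state the bound only up to such factors):

$$\mathrm{KL}\big(q \,\big\|\, p_K\big) \;\le\; C\,\frac{H(q)^2}{K},$$

where $q$ is the target distribution, $p_K$ the distribution generated after $K$ sampling steps, $H(q)$ the Shannon entropy of the target, and $C$ a constant absorbing the endpoint factors.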

📝 Abstract
Diffusion generative models synthesize samples by discretizing reverse-time dynamics driven by a learned score (or denoiser). Existing convergence analyses of diffusion models typically scale at least linearly with the ambient dimension, and sharper rates often depend on intrinsic-dimension assumptions or other geometric restrictions on the target distribution. We develop an alternative, information-theoretic approach to dimension-free convergence that avoids any geometric assumptions. Under mild assumptions on the target distribution, we bound the KL divergence between the target and generated distributions by $O(H^2/K)$ (up to endpoint factors), where $H$ is the Shannon entropy and $K$ is the number of sampling steps. Moreover, using a reformulation of the KL divergence, we propose a Loss-Adaptive Schedule (LAS) for efficient discretization of the reverse SDE; it is lightweight, relies only on the training loss, and requires no heavy post-training computation. Empirically, LAS improves sampling quality over common heuristic schedules.
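
The abstract does not spell out the LAS formula, so the sketch below is one natural reading under stated assumptions: allocate the $K$ sampling steps so that each step covers an equal share of the cumulative training-loss mass over diffusion time. The function `loss_adaptive_schedule`, the equal-mass rule, and the synthetic loss curve are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def loss_adaptive_schedule(ts, losses, K):
    """Hypothetical loss-adaptive schedule: pick K + 1 discretization
    times so each sampling step carries an equal share of the cumulative
    training-loss mass (an assumed reading of LAS, not the paper's rule).

    ts     : increasing 1D array of diffusion times in (0, 1].
    losses : training loss logged at each time in `ts`.
    K      : number of sampling steps.
    """
    ts = np.asarray(ts, dtype=float)
    losses = np.asarray(losses, dtype=float)
    # Cumulative loss mass via the trapezoidal rule, normalized to [0, 1].
    mass = np.concatenate([[0.0], np.cumsum(
        0.5 * (losses[1:] + losses[:-1]) * np.diff(ts))])
    mass /= mass[-1]
    # Invert the cumulative mass at K + 1 equally spaced quantiles;
    # a reverse-time sampler then visits these times from t = 1 down to 0.
    return np.interp(np.linspace(0.0, 1.0, K + 1), mass, ts)

# Toy check: a loss curve concentrated near t = 0 yields a schedule with
# proportionally finer steps near t = 0.
ts = np.linspace(1e-3, 1.0, 1000)
losses = 1.0 / ts  # stand-in for a logged training-loss curve
print(loss_adaptive_schedule(ts, losses, K=10))
```
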
Problem

Research questions and friction points this paper is trying to address.

diffusion models
dimension-free convergence
KL divergence
entropy
sampling schedules
Innovation

Methods, ideas, or system contributions that make the work stand out.

dimension-free convergence
entropy-based analysis
loss-adaptive schedule
diffusion models
KL divergence bound