AI Summary
Diffusion models suffer from high computational costs due to iterative, dense sampling, which hinders practical deployment. This paper proposes a sparsity-regularized diffusion framework and provides the first theoretical proof that such regularization reduces the computational complexity of the diffusion process: the complexity scales with the intrinsic dimension of the data rather than the ambient input dimension. The method combines high-dimensional statistical modeling, analysis of the diffusion dynamics, and rigorous error-bound derivation, and empirically optimizes sampling trajectories across multiple benchmarks. Experiments show that, while maintaining or even improving generation quality (as measured by FID and LPIPS), the approach significantly reduces the number of sampling steps (by 35–52% on average) and the overall computational overhead. The core contribution is a formal theoretical link between sparsity regularization and intrinsic-dimension-driven efficiency gains, achieving joint optimization of sample quality and inference speed.
Abstract
Diffusion models are a key architecture of generative AI. Their main drawback, however, is their computational cost. This study shows that the concept of sparsity, well known in statistics, provides a pathway to more efficient diffusion pipelines. Our mathematical guarantees prove that sparsity can reduce the influence of the input dimension on the computational complexity to that of a much smaller intrinsic dimension of the data. Our empirical findings confirm that inducing sparsity can indeed yield better samples at lower cost.
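To make the idea of "inducing sparsity" concrete, here is a minimal toy sketch. The paper's actual model, loss weighting, and regularizer are not specified in this summary, so the linear denoiser `W`, the noise level `sigma`, and the L1 penalty strength `lam` below are all hypothetical stand-ins illustrating how a sparsity term can be attached to a denoising objective:

```python
import numpy as np

def sparse_denoising_loss(W, x0, noise, sigma, lam):
    """Denoising reconstruction error plus an L1 sparsity penalty on W.

    This is an illustrative objective only; the paper's regularizer and
    architecture may differ.
    """
    x_t = x0 + sigma * noise                   # forward (noising) step of diffusion
    x_hat = x_t @ W                            # toy linear denoiser
    recon = float(np.mean((x_hat - x0) ** 2))  # how well we recover the clean data
    penalty = lam * float(np.abs(W).sum())     # L1 term encouraging sparse weights
    return recon + penalty

rng = np.random.default_rng(0)
n, d = 64, 16
x0 = rng.standard_normal((n, d))    # "clean" data batch
noise = rng.standard_normal((n, d))
W = np.eye(d)                       # identity denoiser as a trivial baseline

loss_plain = sparse_denoising_loss(W, x0, noise, sigma=0.1, lam=0.0)
loss_sparse = sparse_denoising_loss(W, x0, noise, sigma=0.1, lam=1e-3)
```

Minimizing such an objective drives many entries of `W` to zero, which is the mechanism by which sparsity can tie the effective complexity to the intrinsic, rather than ambient, dimension.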