🤖 AI Summary
This work addresses a limitation of fixed-complexity generative priors in inverse problems: a prior that is too small yields high representation error, while one that is too large overfits to noise. The authors propose adaptive-complexity generative priors that modulate the capacity of diffusion models, normalizing flows, and variational autoencoders via a nested dropout mechanism, tailoring model complexity to the demands of each inverse problem. The approach is the first to enable continuous, controllable complexity adjustment across multiple generative model families, and it is accompanied by theoretical guarantees in the linear denoising setting. Experiments on diverse tasks—including compressed sensing, image inpainting, denoising, and phase retrieval—show markedly lower reconstruction errors than fixed-complexity baselines.
📝 Abstract
Generative models have emerged as powerful priors for solving inverse problems. These models typically represent a class of natural signals using a single fixed complexity or dimensionality. This can be limiting: depending on the problem, a fixed complexity may result in high representation error if too small, or overfitting to noise if too large. We develop tunable-complexity priors for diffusion models, normalizing flows, and variational autoencoders, leveraging nested dropout. Across tasks including compressed sensing, inpainting, denoising, and phase retrieval, we show empirically that tunable priors consistently achieve lower reconstruction errors than fixed-complexity baselines. In the linear denoising setting, we provide a theoretical analysis that explicitly characterizes how the optimal tuning parameter depends on noise and model structure. This work demonstrates the potential of tunable-complexity generative priors and motivates both the development of supporting theory and their application across a wide range of inverse problems.
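The nested dropout mechanism referenced above can be sketched roughly as follows. During training, a random prefix of the latent units is kept (inducing an importance ordering over units); at inference, complexity is tuned by truncating to the first `k` units. This is a minimal illustrative sketch following the general nested dropout idea — the function names, the geometric truncation distribution, and the parameter `p` are assumptions, not the paper's exact implementation.

```python
import numpy as np

def nested_dropout_mask(latent_dim, rng, p=0.2):
    """Sample a training-time mask that keeps only the first b latent
    units, where b ~ Geometric(p) (capped at latent_dim). Earlier units
    are kept more often, so they learn the most important structure."""
    b = min(rng.geometric(p), latent_dim)  # number of leading units to keep
    mask = np.zeros(latent_dim)
    mask[:b] = 1.0
    return mask

def truncate_latent(z, k):
    """Inference-time complexity control: keep the first k latent units
    and zero out the rest, giving a tunable-capacity representation."""
    z_trunc = np.copy(z)
    z_trunc[k:] = 0.0
    return z_trunc
```

In a reconstruction pipeline, `k` would then be chosen per inverse problem (e.g., smaller `k` under heavy noise to avoid overfitting, larger `k` for clean measurements), which is the tuning parameter the theoretical analysis characterizes in the linear denoising case.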