🤖 AI Summary
Diffusion models face a bilevel optimization challenge in downstream task fine-tuning: the upper level requires tuning noise schedules and hyperparameters, while the lower level requires sampling from a high-dimensional probability distribution—resulting in prohibitive computational cost and intractable gradient backpropagation. This work introduces the first *first-order* bilevel optimization framework for generative modeling. We propose a sample-efficient gradient estimation method that leverages inference-time lower-level solvers, and we enable joint end-to-end training of noise schedules and model parameters via a reparameterization of the diffusion process. Our approach avoids second-order derivatives and implicit-differentiation approximations, offering both theoretical convergence guarantees and practical efficiency. Experiments demonstrate significant improvements over state-of-the-art baselines in diffusion model fine-tuning and noise schedule optimization.
📝 Abstract
Diffusion models, which iteratively denoise data samples to synthesize high-quality outputs, have achieved empirical success across domains. However, optimizing these models for downstream tasks often involves nested bilevel structures, such as tuning hyperparameters for fine-tuning tasks or noise schedules in training dynamics, where traditional bilevel methods fail due to the infinite-dimensional probability space and prohibitive sampling costs. We formalize this challenge as a generative bilevel optimization problem and address two key scenarios: (1) fine-tuning pre-trained models via an inference-only lower-level solver paired with a sample-efficient gradient estimator for the upper level, and (2) training diffusion models from scratch with noise schedule optimization by reparameterizing the lower-level problem and designing a computationally tractable gradient estimator. Our first-order bilevel framework overcomes the incompatibility of conventional bilevel methods with diffusion processes, offering theoretical grounding and computational practicality. Experiments demonstrate that our method outperforms existing fine-tuning and hyperparameter search baselines.
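To make the first-order idea concrete, here is a minimal toy sketch of such a scheme. Everything in it is a hypothetical stand-in, not the paper's method or code: the "lower level" is a frozen sampler reduced to shifting Gaussian noise by a scalar schedule parameter `lam`, and the upper-level gradient is a two-point finite-difference estimate over sampler outputs—forward passes only, with no second-order derivatives or implicit differentiation.

```python
import numpy as np

# Toy sketch (hypothetical names, quadratic stand-in for the downstream
# objective): the upper level tunes a scalar "schedule" parameter lam; the
# lower level maps noise to samples whose distribution depends on lam,
# standing in for a frozen diffusion sampler.

rng = np.random.default_rng(0)

def lower_level_solver(lam, z):
    # "Inference-only" lower level: no gradients flow through the sampler.
    return lam + z

def upper_loss(samples, target=2.0):
    # Downstream objective evaluated on generated samples.
    return np.mean((samples - target) ** 2)

def first_order_grad(lam, eps=1e-2, n=256):
    # Sample-efficient two-point gradient estimate using common random
    # numbers: only forward passes through the sampler are needed, avoiding
    # second-order derivatives and implicit differentiation.
    z = rng.standard_normal(n)
    f_plus = upper_loss(lower_level_solver(lam + eps, z))
    f_minus = upper_loss(lower_level_solver(lam - eps, z))
    return (f_plus - f_minus) / (2 * eps)

lam, lr = 0.0, 0.2
for _ in range(200):
    lam -= lr * first_order_grad(lam)

print(f"tuned lam = {lam:.2f}")  # drifts toward the target mean of 2.0
```

Reusing the same noise draw `z` for both perturbations (common random numbers) keeps the finite-difference estimate low-variance even with few samples, which is the kind of sample efficiency the upper-level estimator needs when each lower-level "solve" is an expensive sampling run.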