🤖 AI Summary
Controllable generation in diffusion models faces dual challenges: reward overfitting and the high computational cost of reinforcement learning (RL)-based fine-tuning. This paper proposes SLCD, a supervised learning framework that reformulates controllable generation as a KL-regularized optimization problem, bypassing RL entirely. Instead, SLCD trains a lightweight classifier via supervised learning to provide online guidance during sampling. Theoretically, the paper is the first to reduce the convergence analysis of controllable generation to no-regret online learning, establishing rigorous global optimality guarantees under KL divergence. On the engineering side, SLCD requires no fine-tuning of the base model and achieves inference speed nearly identical to the base diffusion model. Empirically, on image and biomolecular sequence (DNA/molecule) generation tasks, SLCD attains state-of-the-art controllability, matches RL-based methods in sample quality, exhibits superior generalization, and substantially reduces training cost and deployment complexity.
📝 Abstract
Controllable generation in diffusion models aims to steer the model to generate samples that optimize a given objective function. This capability is desirable for a variety of applications, including image generation, molecule generation, and DNA/sequence generation. Reinforcement Learning (RL)-based fine-tuning of the base model is a popular approach, but it can overfit the reward function while requiring significant resources. We frame controllable generation as the problem of finding a distribution that optimizes a KL-regularized objective function. We present SLCD -- Supervised Learning based Controllable Diffusion -- which iteratively generates online data and trains a small classifier to guide the generation of the diffusion model. As in standard classifier-guided diffusion, SLCD's key computational primitive is classification, and it does not involve any complex concepts from RL or control. Via a reduction to no-regret online learning, we show that under KL divergence the output of SLCD provably converges to the optimal solution of the KL-regularized objective. Further, we empirically demonstrate that SLCD generates high-quality samples with nearly the same inference time as the base model, in both image generation with continuous diffusion and biological sequence generation with discrete diffusion. Our code is available at https://github.com/Owen-Oertell/slcd.
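The classifier-guided loop described above can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation (see the linked repository for that); the toy `base_denoise_step`, the reward, and all hyperparameters are placeholder assumptions. The point is the shape of the method: a small classifier is fit by supervised learning on online samples labeled by reward, and its score gradient is added to each denoising step, leaving the base model frozen.

```python
# Hypothetical sketch of SLCD-style classifier-guided sampling.
# Assumptions: toy denoiser, toy reward, arbitrary hyperparameters.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, GUIDANCE_SCALE, N_STEPS = 8, 0.5, 10

def base_denoise_step(x, t):
    # Stand-in for one step of a frozen, pretrained diffusion sampler.
    return x - 0.1 * x + 0.05 * torch.randn_like(x)

def reward(x):
    # Toy reward: prefer samples near the all-ones vector.
    return -((x - 1.0) ** 2).sum(dim=-1)

# Lightweight classifier predicting whether a sample is high-reward;
# trained purely by supervised learning -- no RL machinery involved.
classifier = nn.Sequential(nn.Linear(DIM, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-2)

# --- One round of the iterative loop: fit classifier on online data ---
for _ in range(200):
    x = torch.randn(64, DIM)                              # online samples
    labels = (reward(x) > reward(x).median()).float().unsqueeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(classifier(x), labels)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Guided sampling: add the classifier's score gradient each step ---
x = torch.randn(4, DIM)
for t in range(N_STEPS):
    x = x.detach().requires_grad_(True)
    logp = nn.functional.logsigmoid(classifier(x)).sum()
    grad = torch.autograd.grad(logp, x)[0]                # grad_x log p(high | x)
    with torch.no_grad():
        x = base_denoise_step(x, t) + GUIDANCE_SCALE * grad

samples = x.detach()
```

Since guidance only adds a gradient term at sampling time, inference cost stays close to that of the base model, which is the property the abstract highlights.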