🤖 AI Summary
Existing diffusion-based text-to-image (T2I) models rely on forward discretization of the reverse process and data-driven score estimation, which makes sampling slow and unstable and requires many steps. This work proposes ProxT2I, a T2I framework that performs backward discretization with a learned *conditional proximal operator*, replacing conventional score estimation with a principled optimization step. ProxT2I further integrates reward-guided reinforcement learning to align generated images with human preferences. To support training and evaluation, the authors introduce LAION-Face-T2I-15M, a large-scale, face-centric T2I dataset of 15 million captioned human images derived from LAION. Built on lightweight architectures, ProxT2I achieves a 3.2× sampling speedup while matching state-of-the-art open-source models on FID, CLIP-Score, and human evaluations. Crucially, it reduces both computational cost and parameter count, demonstrating an improved efficiency–quality trade-off without compromising fidelity or alignment.
📝 Abstract
Diffusion models have emerged as a dominant paradigm for generative modeling across a wide range of domains, including prompt-conditional generation. The vast majority of samplers, however, rely on forward discretization of the reverse diffusion process and use score functions learned from data. Such forward, explicit discretizations can be slow and unstable, requiring a large number of sampling steps to produce good-quality samples. In this work we develop a text-to-image (T2I) diffusion model based on backward discretizations, dubbed ProxT2I, which relies on learned conditional proximal operators instead of score functions. We further leverage recent advances in reinforcement learning and policy optimization to tune our samplers for task-specific rewards. Additionally, we release LAION-Face-T2I-15M, a new large-scale, open-source dataset of 15 million high-quality human images with fine-grained captions, for training and evaluation. Our approach consistently improves sampling efficiency and human-preference alignment over score-based baselines, and achieves results on par with existing state-of-the-art open-source text-to-image models while requiring less compute and a smaller model, offering a lightweight yet performant solution for human-centric text-to-image generation.
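The contrast between the forward (explicit) and backward (implicit) discretizations described in the abstract can be sketched in a single reverse update. The notation below is illustrative and not necessarily the paper's own: $s_\theta$ is a learned score network, $f_\theta$ a learned conditional potential, $\eta$ a step size, and $c$ the text prompt.

```latex
% Explicit (score-based) reverse step: a forward-Euler update driven by the score.
x_{t-1} = x_t + \eta\, s_\theta(x_t, t, c) + \sqrt{2\eta}\,\varepsilon_t,
\qquad \varepsilon_t \sim \mathcal{N}(0, I).

% Implicit (backward-discretized) reverse step via a conditional proximal operator:
% the next iterate solves a small optimization problem instead of taking a gradient step.
x_{t-1} = \operatorname{prox}_{\eta f_\theta(\cdot\,;\,t,c)}(x_t)
        = \operatorname*{arg\,min}_{z}
          \left\{ f_\theta(z; t, c) + \frac{1}{2\eta}\,\|z - x_t\|^2 \right\}.
```

Informally, when $f_\theta$ plays the role of a negative conditional log-density, the proximal step is a backward-Euler update, which is generally more stable per step than the explicit one and can therefore tolerate larger step sizes, and hence fewer sampling steps.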