Rewards Are Enough for Fast Photo-Realistic Text-to-Image Generation

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Aligning generated images with complex text prompts and human preferences remains a central challenge in AIGC-based text-to-image generation. Method: This paper proposes R0, a reward-driven framework that reformulates generation as regularized reward maximization in data space, abandoning conventional diffusion distillation losses. R0 treats generation as a search for valid images with high compositional rewards, arguing that as conditions grow more specific and reward signals stronger, the rewards themselves govern the generative process; its designs include reward function optimization, generator parameterization, and regularization grounded in pretrained diffusion models. Contribution/Results: R0 trains state-of-the-art few-step text-to-image generative models at scale, with improvements in semantic fidelity and visual realism. These results support a reward-dominated generative paradigm in which diffusion losses serve only as an overly expensive form of regularization.
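
To make "regularized reward maximization in data space" concrete, here is a minimal Python sketch: a toy generator is trained to maximize a placeholder reward while a regularizer keeps its outputs close to a frozen reference network. Every name here (`FewStepGenerator`, `reward_fn`, `lam`) is an assumption for illustration, not the paper's implementation; R0's actual parameterization and regularizers differ in detail.

```python
import torch
import torch.nn as nn

class FewStepGenerator(nn.Module):
    """Toy stand-in for a few-step generator initialized from a diffusion model."""
    def __init__(self, latent_dim=64, image_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.SiLU(), nn.Linear(256, image_dim),
        )

    def forward(self, z):
        return self.net(z)

def reward_fn(images):
    # Placeholder: in practice a pretrained preference / alignment model.
    return -images.pow(2).mean(dim=1)

gen = FewStepGenerator()
ref = FewStepGenerator()                 # frozen reference, standing in for the
for p in ref.parameters():               # paper's pretrained-diffusion constraints
    p.requires_grad_(False)

opt = torch.optim.AdamW(gen.parameters(), lr=1e-4)
lam = 0.1                                # regularization weight (assumed)

for step in range(100):
    z = torch.randn(16, 64)
    x = gen(z)
    reward = reward_fn(x).mean()         # maximize reward of generated images
    reg = (x - ref(z)).pow(2).mean()     # stay close to the reference in data space
    loss = -reward + lam * reg           # regularized reward maximization
    opt.zero_grad()
    loss.backward()
    opt.step()
```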

📝 Abstract
Aligning generated images to complicated text prompts and human preferences is a central challenge in Artificial Intelligence-Generated Content (AIGC). With reward-enhanced diffusion distillation emerging as a promising approach that boosts controllability and fidelity of text-to-image models, we identify a fundamental paradigm shift: as conditions become more specific and reward signals stronger, the rewards themselves become the dominant force in generation. In contrast, the diffusion losses serve as an overly expensive form of regularization. To thoroughly validate our hypothesis, we introduce R0, a novel conditional generation approach via regularized reward maximization. Instead of relying on tricky diffusion distillation losses, R0 proposes a new perspective that treats image generation as an optimization problem in data space, searching for valid images that have high compositional rewards. Through innovative designs of the generator parameterization and proper regularization techniques, we train state-of-the-art few-step text-to-image generative models with R0 at scale. Our results challenge the conventional wisdom of diffusion post-training and conditional generation by demonstrating that rewards play a dominant role in scenarios with complex conditions. We hope our findings can contribute to further research into human-centric and reward-centric generation paradigms across the broader field of AIGC. Code is available at https://github.com/Luo-Yihong/R0.
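
The abstract's "high compositional rewards" suggests combining several reward signals into one scalar objective. The sketch below shows one plausible composition as a weighted sum; the component rewards, function names, and weights are assumptions for illustration, since the paper defines its own set of pretrained reward models.

```python
import torch

def clip_alignment_reward(images, prompts):
    # Stand-in for an image-text alignment score (e.g., a CLIP-style model).
    return torch.rand(images.shape[0])

def aesthetic_reward(images):
    # Stand-in for a learned visual-quality predictor.
    return torch.rand(images.shape[0])

def preference_reward(images, prompts):
    # Stand-in for a human-preference model trained on pairwise comparisons.
    return torch.rand(images.shape[0])

def compositional_reward(images, prompts, weights=(1.0, 0.5, 1.0)):
    """Weighted sum of component rewards; the generator maximizes this scalar."""
    w_align, w_aes, w_pref = weights
    return (w_align * clip_alignment_reward(images, prompts)
            + w_aes * aesthetic_reward(images)
            + w_pref * preference_reward(images, prompts))

scores = compositional_reward(torch.randn(4, 3, 32, 32), ["a red cube"] * 4)
```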
Problem

Research questions and friction points this paper is trying to address.

Aligning generated images with complex text prompts and human preferences.
Enhancing controllability and fidelity in text-to-image generation models.
Optimizing image generation through reward maximization and regularization techniques.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies a paradigm shift: under specific conditions and strong reward signals, rewards dominate generation
R0 introduces regularized reward maximization, replacing costly diffusion distillation losses
Treats generation as a data-space search for images with high compositional rewards, enabling few-step synthesis (sketched below)
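
To make "few-step" concrete, here is a hedged sketch of sampling in four denoising passes with a backbone that predicts the clean image at each step. The timestep schedule and re-noising rule are generic consistency-style assumptions, not R0's exact procedure.

```python
import torch

@torch.no_grad()
def few_step_sample(backbone, shape, steps=(999, 749, 499, 249)):
    """Generate in len(steps) denoising passes; four here, matching 'few-step'."""
    x = torch.randn(shape)                      # start from pure noise
    for i, t in enumerate(steps):
        t_batch = torch.full((shape[0],), t)
        x0_pred = backbone(x, t_batch)          # backbone predicts the clean image
        if i + 1 < len(steps):
            a = 1.0 - steps[i + 1] / 1000.0     # crude linear noise schedule
            x = a**0.5 * x0_pred + (1 - a)**0.5 * torch.randn_like(x0_pred)
        else:
            x = x0_pred
    return x

# Toy backbone; a real one would be a text-conditioned UNet or transformer.
backbone = lambda x, t: torch.zeros_like(x)
image = few_step_sample(backbone, (1, 3, 32, 32))
```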