Policy Optimized Text-to-Image Pipeline Design

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automated design of text-to-image multi-component pipelines faces two major bottlenecks: prohibitively high computational cost and poor generalization across tasks. Method: We propose the first end-to-end reinforcement learning framework that eliminates reliance on costly image generation for evaluation, introducing a novel image-generation-free ensemble reward model. Our approach employs a two-stage optimization strategy—lexical pretraining followed by Group Relative Policy Optimization (GRPO)—and incorporates Classifier-Free Guidance (CFG)-style model extrapolation to enhance structural diversity. Contribution/Results: Without rendering any images, our method enables efficient workflow sequence modeling and search. It achieves state-of-the-art performance in image fidelity, structural novelty, and cross-task generalization, significantly outperforming existing baselines while substantially reducing training overhead.
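The key efficiency claim is that candidate workflows are scored without ever rendering an image: an ensemble of reward models predicts a quality score directly from a prompt-workflow pair. A minimal sketch of that idea, with hypothetical stand-in reward models (the paper's actual predictors are learned):

```python
# Sketch (all names hypothetical): rank candidate pipelines by averaging the
# predictions of several reward models, with no image generation in the loop.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    prompt: str
    workflow: str  # serialized pipeline description

def ensemble_score(c: Candidate, reward_models: List[Callable[[str, str], float]]) -> float:
    """Average the quality predictions of the ensemble for one candidate."""
    scores = [rm(c.prompt, c.workflow) for rm in reward_models]
    return sum(scores) / len(scores)

# Toy reward models standing in for learned score predictors:
rms = [
    lambda p, w: 0.8 if "upscale" in w else 0.5,
    lambda p, w: 0.7,
]
candidates = [Candidate("a red fox", "base"), Candidate("a red fox", "base+upscale")]
best = max(candidates, key=lambda c: ensemble_score(c, rms))
print(best.workflow)  # → base+upscale
```

Because scoring is just a forward pass over text, the search over workflow space avoids the cost of running hundreds of full image-generation pipelines.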

📝 Abstract
Text-to-image generation has evolved beyond single monolithic models to complex multi-component pipelines. These combine fine-tuned generators, adapters, upscaling blocks and even editing steps, leading to significant improvements in image quality. However, their effective design requires substantial expertise. Recent approaches have shown promise in automating this process through large language models (LLMs), but they suffer from two critical limitations: extensive computational requirements from generating images with hundreds of predefined pipelines, and poor generalization beyond memorized training examples. We introduce a novel reinforcement learning-based framework that addresses these inefficiencies. Our approach first trains an ensemble of reward models capable of predicting image quality scores directly from prompt-workflow combinations, eliminating the need for costly image generation during training. We then implement a two-phase training strategy: initial workflow vocabulary training followed by GRPO-based optimization that guides the model toward higher-performing regions of the workflow space. Additionally, we incorporate a classifier-free guidance based enhancement technique that extrapolates along the path between the initial and GRPO-tuned models, further improving output quality. We validate our approach through a set of comparisons, showing that it can successfully create new flows with greater diversity and lead to superior image quality compared to existing baselines.
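The classifier-free-guidance-based enhancement described in the abstract extrapolates along the path between the initial and GRPO-tuned models. A minimal sketch of that operation on raw parameter vectors, assuming a simple linear extrapolation in weight space (the symbol names and the scale value are illustrative, not from the paper):

```python
# Sketch: CFG-style extrapolation in weight space. With guidance_scale = 1.0
# we recover the tuned model; with guidance_scale > 1.0 we move past it,
# further in the direction the GRPO update pushed the weights.
def cfg_extrapolate(theta_init, theta_tuned, guidance_scale=1.5):
    """theta = theta_init + s * (theta_tuned - theta_init), elementwise."""
    return [ti + guidance_scale * (tt - ti)
            for ti, tt in zip(theta_init, theta_tuned)]

init = [0.0, 1.0]   # toy pre-GRPO weights
tuned = [1.0, 1.0]  # toy GRPO-tuned weights
print(cfg_extrapolate(init, tuned, guidance_scale=1.5))  # → [1.5, 1.0]
```

In practice the same formula would be applied per tensor across the model's state dict; coordinates the tuning did not change are left untouched by the extrapolation.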
Problem

Research questions and friction points this paper is trying to address.

Automating multi-component text-to-image pipeline design efficiently
Reducing computational costs in pipeline optimization without image generation
Improving generalization and diversity in generated image workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning optimizes text-to-image workflows
Reward models predict quality without image generation
GRPO-based training and CFG-guided extrapolation enhance output quality