PromptRL: Prompt Matters in RL for Flow-Based Image Generation

📅 2026-02-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing flow-matching reinforcement learning methods suffer from overfitting under semantically equivalent but stylistically diverse prompts and exhibit low sample efficiency. This work proposes the first end-to-end prompt-augmentation framework that integrates a trainable language model as an embedded prompt-optimization agent within the flow-matching RL architecture. Through a co-training mechanism, the approach jointly optimizes prompt rewriting and generation alignment, effectively mitigating overfitting and significantly improving sample efficiency. The method achieves state-of-the-art results with scores of 0.97 on GenEval, 0.98 on OCR accuracy, and 24.05 on PickScore. Notably, using only 0.06M rollouts, it boosts the EditReward of FLUX.1-Kontext from 1.19 to 1.43, outperforming Gemini 2.5 Flash Image and matching the performance of annotation-intensive approaches like ReasonNet.

๐Ÿ“ Abstract
Flow matching models (FMs) have revolutionized text-to-image (T2I) generation, with reinforcement learning (RL) serving as a critical post-training strategy for alignment with reward objectives. In this research, we show that current RL pipelines for FMs suffer from two underappreciated yet important limitations: sample inefficiency due to insufficient generation diversity, and pronounced prompt overfitting, where models memorize specific training formulations and exhibit dramatic performance collapse when evaluated on semantically equivalent but stylistically varied prompts. We present PromptRL (Prompt Matters in RL for Flow-Based Image Generation), a framework that incorporates language models (LMs) as trainable prompt refinement agents directly within the flow-based RL optimization loop. This design yields two complementary benefits: rapid development of sophisticated prompt rewriting capabilities and, critically, a synergistic training regime that reshapes the optimization dynamics. PromptRL achieves state-of-the-art performance across multiple benchmarks, obtaining scores of 0.97 on GenEval, 0.98 on OCR accuracy, and 24.05 on PickScore. Furthermore, we validate the effectiveness of our RL approach on large-scale image editing models, improving the EditReward of FLUX.1-Kontext from 1.19 to 1.43 with only 0.06 million rollouts, surpassing Gemini 2.5 Flash Image (also known as Nano Banana), which scores 1.37, and achieving performance comparable to ReasonNet (1.44), which relied on fine-grained data annotations and complex multi-stage training. Our extensive experiments empirically demonstrate that PromptRL consistently achieves higher performance ceilings while requiring over 2× fewer rollouts compared to naive flow-only RL. Our code is available at https://github.com/G-U-N/UniRL.
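The co-training idea in the abstract (a prompt-rewriting LM and a flow-matching generator optimized jointly against one reward) can be illustrated with a minimal, purely hypothetical sketch. None of the function names below come from the PromptRL codebase; the LM, the flow sampler, and the reward are toy stand-ins, and the actual policy-gradient updates are elided.

```python
import random

def rewrite_prompt(prompt, rng):
    """Stand-in for the trainable LM agent: samples a stylistic variant
    of the user prompt (the 'prompt refinement' action)."""
    styles = ["a photo of {}", "an illustration of {}", "{}, highly detailed"]
    return rng.choice(styles).format(prompt)

def generate_image(prompt, rng):
    """Stand-in for the flow-matching sampler; returns a dummy latent."""
    return [rng.random() for _ in range(4)]

def reward(prompt, image):
    """Stand-in for a reward model such as PickScore or OCR accuracy."""
    return sum(image) / len(image)

def cotrain_step(base_prompt, group_size, rng):
    """One group rollout: a shared, group-relative advantage would drive
    updates to BOTH the rewrite policy and the generator (updates omitted)."""
    rollouts = []
    for _ in range(group_size):
        p = rewrite_prompt(base_prompt, rng)      # LM action
        img = generate_image(p, rng)              # flow-model action
        rollouts.append((p, reward(p, img)))
    mean_r = sum(r for _, r in rollouts) / group_size
    # Advantage relative to the group mean; high-advantage rewrites and
    # samples are reinforced in both policies.
    return [(p, r - mean_r) for p, r in rollouts]

advantages = cotrain_step("a red bicycle", group_size=8, rng=random.Random(0))
```

The key design point this sketch tries to capture is that prompt rewriting is an action inside the RL loop, not a fixed preprocessing step, so reward gradients shape the rewrite distribution and the generator together.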
Problem

Research questions and friction points this paper is trying to address.

sample inefficiency
prompt overfitting
flow matching
reinforcement learning
text-to-image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

PromptRL
flow-based reinforcement learning
prompt refinement
text-to-image generation
sample efficiency