🤖 AI Summary
Existing flow-matching models achieve high-quality text-to-image generation but underperform when integrated with reinforcement learning for human preference alignment, primarily because the uniform time-step assumption impedes effective credit assignment of sparse terminal rewards to the critical generation steps. To address this, we propose TempFlow-GRPO (Temporal Flow GRPO), a framework that introduces process rewards via a trajectory branching mechanism during generation and employs a noise-aware weighting strategy for time-adaptive credit assignment. TempFlow-GRPO eliminates the need for auxiliary intermediate reward models and overcomes the limitations of uniform temporal weighting. Experiments demonstrate that TempFlow-GRPO achieves state-of-the-art performance on both human preference alignment and standard text-to-image benchmarks, significantly improving training efficiency and generated image quality.
📝 Abstract
Recent flow matching models for text-to-image generation have achieved remarkable quality, yet their integration with reinforcement learning for human preference alignment remains suboptimal, hindering fine-grained reward-based optimization. We observe that the key impediment to effective GRPO training of flow models is the temporal uniformity assumption in existing approaches: sparse terminal rewards with uniform credit assignment fail to capture the varying criticality of decisions across generation timesteps, resulting in inefficient exploration and suboptimal convergence. To remedy this shortcoming, we introduce **TempFlow-GRPO** (Temporal Flow GRPO), a principled GRPO framework that captures and exploits the temporal structure inherent in flow-based generation. TempFlow-GRPO introduces two key innovations: (i) a trajectory branching mechanism that provides process rewards by concentrating stochasticity at designated branching points, enabling precise credit assignment without requiring specialized intermediate reward models; and (ii) a noise-aware weighting scheme that modulates policy optimization according to the intrinsic exploration potential of each timestep, prioritizing learning during high-impact early stages while ensuring stable refinement in later phases. These innovations endow the model with temporally-aware optimization that respects the underlying generative dynamics, leading to state-of-the-art performance in human preference alignment and standard text-to-image benchmarks.
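To make the noise-aware weighting idea concrete, here is a minimal, hypothetical sketch of how a GRPO-style clipped surrogate loss could be reweighted by per-timestep noise levels, so that high-noise (early, exploratory) steps receive larger updates than low-noise refinement steps. The function name, signature, and the specific normalization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def noise_aware_grpo_loss(prob_ratios, advantages, sigmas, clip_eps=0.2):
    """Hypothetical sketch: weight per-timestep clipped policy-gradient
    terms by each step's noise level, so high-noise (early) timesteps
    receive proportionally larger updates.

    prob_ratios: (T,) array of pi_new / pi_old probability ratios per timestep
    advantages:  (T,) array of (group-relative) advantages
    sigmas:      (T,) array of noise levels, typically larger at early steps
    """
    ratios = np.asarray(prob_ratios, dtype=float)
    adv = np.asarray(advantages, dtype=float)
    sig = np.asarray(sigmas, dtype=float)

    # Standard PPO/GRPO-style clipped surrogate objective, per timestep.
    unclipped = ratios * adv
    clipped = np.clip(ratios, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    per_step = np.minimum(unclipped, clipped)

    # Noise-aware weights, normalized to sum to 1 over the trajectory;
    # uniform sigmas recover the usual uniform credit assignment.
    weights = sig / sig.sum()

    # Negate so minimizing the loss maximizes the weighted objective.
    return -np.sum(weights * per_step)
```

With uniform `sigmas` this reduces to the ordinary time-averaged GRPO surrogate; skewing `sigmas` toward early timesteps shifts optimization pressure to the high-impact decisions the abstract describes.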