TempFlow-GRPO: When Timing Matters for GRPO in Flow Models

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing flow-matching models achieve high-quality text-to-image generation but underperform when integrated with reinforcement learning for human preference alignment, primarily because the uniform time-step assumption impedes effective credit assignment of sparse terminal rewards to critical generation steps. To address this, we propose TempFlow-GRPO, a temporally aware Group Relative Policy Optimization (GRPO) framework that introduces process rewards via a trajectory branching mechanism during generation and employs a noise-aware weighting strategy for time-adaptive credit assignment. TempFlow-GRPO eliminates the need for auxiliary intermediate reward models and overcomes the limitations of uniform temporal weighting. Experiments demonstrate that TempFlow-GRPO achieves state-of-the-art performance on both human preference alignment and standard generation benchmarks, with improved training efficiency and generated image quality.
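For context, the sketch below shows the group-relative advantage that GRPO-style methods derive from sparse terminal rewards; under the uniform time-step assumption criticized above, this single scalar credits every denoising step equally. A minimal NumPy sketch; the function name and toy reward values are illustrative, not from the paper.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standard GRPO-style advantage: normalize terminal rewards within a
    group of samples drawn for the same prompt. With uniform temporal
    weighting, this one scalar is applied identically at every generation
    step, which is the credit-assignment failure TempFlow-GRPO targets."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: four images sampled for one prompt, scored by a terminal reward model.
rewards = np.array([0.62, 0.71, 0.55, 0.80])
print(group_relative_advantages(rewards))
```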

📝 Abstract
Recent flow matching models for text-to-image generation have achieved remarkable quality, yet their integration with reinforcement learning for human preference alignment remains suboptimal, hindering fine-grained reward-based optimization. We observe that the key impediment to effective GRPO training of flow models is the temporal uniformity assumption in existing approaches: sparse terminal rewards with uniform credit assignment fail to capture the varying criticality of decisions across generation timesteps, resulting in inefficient exploration and suboptimal convergence. To remedy this shortcoming, we introduce TempFlow-GRPO (Temporal Flow GRPO), a principled GRPO framework that captures and exploits the temporal structure inherent in flow-based generation. TempFlow-GRPO introduces two key innovations: (i) a trajectory branching mechanism that provides process rewards by concentrating stochasticity at designated branching points, enabling precise credit assignment without requiring specialized intermediate reward models; and (ii) a noise-aware weighting scheme that modulates policy optimization according to the intrinsic exploration potential of each timestep, prioritizing learning during high-impact early stages while ensuring stable refinement in later phases. These innovations endow the model with temporally-aware optimization that respects the underlying generative dynamics, leading to state-of-the-art performance in human preference alignment and standard text-to-image benchmarks.
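The branching mechanism described in (i) can be sketched as follows, assuming NumPy; ode_step, sde_step, and reward_fn are hypothetical placeholders for a flow model's deterministic step, its stochastic (SDE) step, and the terminal reward model, and the paper's exact procedure may differ.

```python
import numpy as np

def branch_process_rewards(x0, timesteps, branch_t, num_branches,
                           ode_step, sde_step, reward_fn):
    """Sketch of trajectory branching: follow a single deterministic
    trajectory up to branch_t, concentrate all stochasticity at that one
    point by sampling num_branches noisy continuations, then finish each
    branch deterministically and score it with the terminal reward model.
    The group-relative spread of those rewards credits the decision made
    at branch_t; no learned intermediate reward model is needed."""
    x = x0
    for t in timesteps:             # shared deterministic prefix
        if t == branch_t:
            break
        x = ode_step(x, t)
    suffix = timesteps[timesteps.index(branch_t) + 1:]
    rewards = []
    for _ in range(num_branches):
        xb = sde_step(x, branch_t)  # the only injection of noise
        for t in suffix:            # deterministic suffix
            xb = ode_step(xb, t)
        rewards.append(reward_fn(xb))
    r = np.asarray(rewards)
    return (r - r.mean()) / (r.std() + 1e-8)

# Toy usage with stand-in dynamics (a real flow model replaces these):
adv = branch_process_rewards(
    x0=np.zeros(4), timesteps=list(range(10)), branch_t=3, num_branches=4,
    ode_step=lambda x, t: 0.9 * x,
    sde_step=lambda x, t: x + np.random.randn(*x.shape),
    reward_fn=lambda x: float(-np.abs(x).mean()),
)
print(adv)  # per-branch process rewards at the branching point
```

Because all branches share the prefix and differ only in the noise injected at branch_t, reward differences across branches can be attributed to that single decision point.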
Problem

Research questions and friction points this paper is trying to address.

Improving reinforcement learning integration in flow models for human preference alignment
Addressing temporal uniformity in reward assignment for flow model training
Enhancing exploration and convergence in text-to-image generation with temporal structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trajectory branching for precise credit assignment
Noise-aware weighting for modulated policy optimization (sketched after this list)
Temporally-aware optimization respecting generative dynamics
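The noise-aware weighting can be illustrated with a simple monotone-in-noise rule. A minimal sketch, assuming NumPy; noise_aware_weights and the linear schedule are hypothetical, and the paper's exact functional form may differ.

```python
import numpy as np

def noise_aware_weights(sigmas: np.ndarray) -> np.ndarray:
    """Illustrative weighting: scale each timestep's policy-gradient term
    in proportion to its noise level, so high-noise early steps (where the
    stochastic policy can still explore) receive large updates while
    low-noise refinement steps receive small, stable ones. Normalized so
    the mean weight is 1 and the overall loss scale is preserved."""
    w = sigmas / sigmas.sum()
    return w * len(sigmas)

# A typical monotonically decreasing noise schedule over 8 denoising steps:
sigmas = np.linspace(1.0, 0.05, 8)
print(noise_aware_weights(sigmas).round(3))
```

Any monotone increasing function of the noise level realizes the same design intent: prioritize learning during high-impact early stages while keeping later refinement phases stable.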
👥 Authors
Xiaoxuan He, Zhejiang University (Deep Learning)
Siming Fu, Zhejiang University (LLM, Long-tailed Learning, Multi-modal)
Yuke Zhao, Zhejiang University
Wanli Li, Zhejiang University
Jian Yang, WeChat Vision, Tencent Inc.
Dacheng Yin, University of Science and Technology of China (Speech Enhancement, Representation Learning, Speech Editing)
Fengyun Rao, WeChat Vision, Tencent Inc.
Bo Zhang, Zhejiang University