Stepwise Credit Assignment for GRPO on Flow-Matching Models

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of Flow-GRPO, which employs uniform credit assignment and thereby overlooks the heterogeneous contributions of individual steps in the diffusion process, often rewarding suboptimal intermediate states. To overcome this, we propose Stepwise-Flow-GRPO, the first method to implement stepwise credit assignment within flow-matching models. Our approach estimates intermediate rewards via Tweedie's formula, assigns credit based on per-step reward improvements through a gain-based advantage function, and incorporates a DDIM-inspired stochastic differential equation (SDE) to enhance reward quality. By balancing the stochasticity required for policy gradients against improved reward accuracy, Stepwise-Flow-GRPO significantly boosts sample efficiency and convergence speed while preserving generation diversity.
📝 Abstract
Flow-GRPO successfully applies reinforcement learning to flow models, but uses uniform credit assignment across all steps. This ignores the temporal structure of diffusion generation: early steps determine composition and content (low-frequency structure), while late steps resolve details and textures (high-frequency details). Moreover, assigning uniform credit based solely on the final image can inadvertently reward suboptimal intermediate steps, especially when errors are corrected later in the diffusion trajectory. We propose Stepwise-Flow-GRPO, which assigns credit based on each step's reward improvement. By leveraging Tweedie's formula to obtain intermediate reward estimates and introducing gain-based advantages, our method achieves superior sample efficiency and faster convergence. We also introduce a DDIM-inspired SDE that improves reward quality while preserving stochasticity for policy gradients.
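The abstract describes two mechanisms: Tweedie-style intermediate reward estimates along the sampling trajectory, and gain-based advantages computed from per-step reward improvements. The paper's exact formulation is not given here, so the sketch below is a minimal illustration under stated assumptions: a flow-matching convention x_t = (1 - t) * x_0 + t * noise (so the one-step clean estimate is x0_hat = x_t - t * v), and GRPO-style per-step normalization of the gains across a group of samples. All function names and the `step_rewards` layout are hypothetical.

```python
import numpy as np

def x0_estimate(x_t, v, t):
    # Tweedie-style one-step estimate of the clean sample from the
    # current state x_t and predicted velocity v, assuming the flow
    # convention x_t = (1 - t) * x_0 + t * noise, hence x0_hat = x_t - t * v.
    # (Sign conventions vary between flow-matching implementations.)
    return x_t - t * v

def gain_advantages(step_rewards):
    # step_rewards: array of shape (num_samples, num_steps), where entry
    # [i, k] is the reward r(x0_hat) evaluated on the Tweedie estimate
    # at step k of trajectory i.
    # Per-step gain: reward improvement over the previous step
    # (the first step gets zero gain by construction).
    gains = np.diff(step_rewards, axis=1, prepend=step_rewards[:, :1])
    # GRPO-style normalization of each step's gains across the group.
    mean = gains.mean(axis=0, keepdims=True)
    std = gains.std(axis=0, keepdims=True) + 1e-8
    return (gains - mean) / std
```

Steps whose Tweedie estimate improves the reward more than the group average receive positive advantage, which is the stepwise-credit idea the summary contrasts with Flow-GRPO's uniform assignment.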
Problem

Research questions and friction points this paper is trying to address.

credit assignment, flow-matching models, diffusion generation, reinforcement learning, temporal structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

stepwise credit assignment, flow-matching models, reinforcement learning, Tweedie's formula, DDIM-inspired SDE