🤖 AI Summary
Existing GRPO methods for text-to-image generation face two key challenges: (1) inaccurate shared credit assignment—trajectory-level advantages derived from sparse terminal rewards via group normalization are uniformly backpropagated across timesteps, failing to capture the exploratory potential of early denoising steps; and (2) conflicting multi-objective reward mixing—predefined weighted fusion of heterogeneous rewards (e.g., text alignment, visual quality, color fidelity), which differ markedly in scale and variance, induces gradient instability. This paper proposes Multi-GRPO, which combines (1) a tree-structured trajectory design whose branches form temporal groups, enabling fine-grained advantage estimation at early denoising steps; (2) reward-type-specific independent grouping and normalization, decoupling conflicting multi-objective signals; and (3) MCTS-inspired sampling with sparse terminal rewards for stable policy updates. Evaluated on PickScore-25k and OCR-Color-10, Multi-GRPO significantly improves multi-objective alignment accuracy and training stability.
📝 Abstract
Recently, Group Relative Policy Optimization (GRPO) has shown promising potential for aligning text-to-image (T2I) models, yet existing GRPO-based methods suffer from two critical limitations. (1) *Shared credit assignment*: trajectory-level advantages derived from group-normalized sparse terminal rewards are uniformly applied across timesteps, failing to accurately estimate the potential of early denoising steps with vast exploration spaces. (2) *Reward-mixing*: predefined weights for combining multi-objective rewards (e.g., text accuracy, visual quality, text color), which have mismatched scales and variances, lead to unstable gradients and conflicting updates. To address these issues, we propose **Multi-GRPO**, a multi-group advantage estimation framework with two orthogonal grouping mechanisms. For better credit assignment, we introduce tree-based trajectories inspired by Monte Carlo Tree Search: branching trajectories at selected early denoising steps naturally form *temporal groups*, enabling accurate advantage estimation for early steps via descendant leaves while amortizing computation through shared prefixes. For multi-objective optimization, we introduce *reward-based grouping* to compute advantages for each reward function *independently* before aggregation, disentangling conflicting signals. To facilitate evaluation of multi-objective alignment, we curate *OCR-Color-10*, a visual text rendering dataset with explicit color constraints. Across the single-reward *PickScore-25k* and multi-objective *OCR-Color-10* benchmarks, Multi-GRPO achieves superior stability and alignment performance, effectively balancing conflicting objectives. Code will be publicly available at [https://github.com/fikry102/Multi-GRPO](https://github.com/fikry102/Multi-GRPO).
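The reward-based grouping idea can be illustrated with a minimal sketch: instead of group-normalizing a predefined weighted sum of heterogeneous rewards (where the largest-scale reward dominates the advantage signal), each reward is z-scored within its group independently and the normalized advantages are aggregated afterward. This is not the paper's implementation; the function names and the simple mean aggregation are assumptions for illustration.

```python
import numpy as np

def grouped_advantages(rewards, eps=1e-8):
    """Reward-based grouping (sketch): normalize each reward within its
    group independently, then aggregate the normalized advantages.

    rewards: dict mapping reward name -> array of per-sample rewards
             for one group of trajectories.
    """
    advs = []
    for name, r in rewards.items():
        r = np.asarray(r, dtype=float)
        advs.append((r - r.mean()) / (r.std() + eps))  # z-score per reward
    return np.mean(advs, axis=0)  # aggregate after normalization

def mixed_advantages(rewards, weights, eps=1e-8):
    """Naive reward mixing for contrast: normalize a weighted sum,
    so the largest-scale reward dominates the advantage signal."""
    total = sum(w * np.asarray(r, dtype=float)
                for (name, r), w in zip(rewards.items(), weights))
    return (total - total.mean()) / (total.std() + eps)
```

With two perfectly conflicting objectives on different scales (hypothetical values), independent grouping yields near-zero advantages, reflecting the trade-off, while naive mixing is dominated by the large-scale reward.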