Sample By Step, Optimize By Chunk: Chunk-Level GRPO For Text-to-Image Generation

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Group Relative Policy Optimization (GRPO) for flow-matching-based text-to-image (T2I) generation suffers from biased advantage estimation and fails to model the temporal dynamics inherent in the sequential denoising process. Method: We propose Chunk-GRPO, a paradigm that groups consecutive generation steps into semantically coherent chunks. It introduces chunk-level advantage estimation and explicit temporal-consistency modeling within the GRPO framework, coupled with a learnable weighted sampling strategy that improves intra-chunk gradient effectiveness. Contribution/Results: Experiments demonstrate significant improvements over state-of-the-art flow-matching and policy-optimization baselines in both preference alignment and image quality (measured by FID and CLIP Score). The results indicate that chunk-level optimization is key to capturing generative dynamics, pointing toward controllable, alignment-aware training for T2I synthesis.

📝 Abstract
Group Relative Policy Optimization (GRPO) has shown strong potential for flow-matching-based text-to-image (T2I) generation, but it faces two key limitations: inaccurate advantage attribution and neglect of the temporal dynamics of generation. In this work, we argue that shifting the optimization paradigm from the step level to the chunk level can effectively alleviate these issues. Building on this idea, we propose Chunk-GRPO, the first chunk-level GRPO-based approach for T2I generation. The insight is to group consecutive steps into coherent 'chunks' that capture the intrinsic temporal dynamics of flow matching, and to optimize policies at the chunk level. In addition, we introduce an optional weighted sampling strategy to further enhance performance. Extensive experiments show that Chunk-GRPO achieves superior results in both preference alignment and image quality, highlighting the promise of chunk-level optimization for GRPO-based methods.
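The core idea of the abstract can be illustrated with a short sketch. The page does not give the paper's exact estimator, so the function name, the chunk-boundary format, and the normalization below are assumptions; the only grounded elements are GRPO's group-relative baseline and the claim that every step inside a chunk shares one advantage signal.

```python
import numpy as np

def chunk_level_advantages(rewards, chunk_bounds):
    """Group-relative advantages broadcast to chunks (illustrative sketch).

    rewards: (G,) scalar reward per sampled trajectory in the group.
    chunk_bounds: step indices splitting the trajectory into chunks,
                  e.g. [0, 10, 30, 50] -> 3 chunks (hypothetical format).
    Returns a (G, num_chunks) array in which every chunk of a trajectory
    carries that trajectory's group-normalized advantage.
    """
    r = np.asarray(rewards, dtype=float)
    # GRPO's group-relative baseline: normalize rewards within the group.
    adv = (r - r.mean()) / (r.std() + 1e-8)
    num_chunks = len(chunk_bounds) - 1
    # Steps inside a chunk are optimized with one shared signal, so the
    # advantage is attributed per chunk rather than per individual step.
    return np.repeat(adv[:, None], num_chunks, axis=1)
```

In step-level GRPO the same normalized reward would instead be spread over every individual step; collapsing attribution to chunks is what the paper argues reduces the bias.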
Problem

Research questions and friction points this paper is trying to address.

Addresses inaccurate advantage attribution in text-to-image generation
Tackles the neglect of temporal dynamics in flow-matching processes
Proposes chunk-level optimization to improve policy training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chunk-level optimization replaces the step-level paradigm
Grouping consecutive steps captures temporal dynamics
An optional weighted sampling strategy further enhances performance
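The weighted sampling strategy mentioned above could take the following shape. This is a hypothetical formulation: the page says only that learnable weights improve intra-chunk gradient effectiveness, so the softmax-over-logits parameterization and the one-step-per-chunk sampling rule are my assumptions.

```python
import numpy as np

def sample_step_per_chunk(chunk_bounds, logits, seed=None):
    """Weighted intra-chunk step sampling (hypothetical formulation).

    logits: learnable per-step scores; within each chunk, steps with
    higher scores are more likely to receive the gradient update.
    Returns one sampled step index per chunk.
    """
    rng = np.random.default_rng(seed)
    logits = np.asarray(logits, dtype=float)
    picks = []
    for lo, hi in zip(chunk_bounds[:-1], chunk_bounds[1:]):
        z = logits[lo:hi]
        p = np.exp(z - z.max())
        p /= p.sum()  # softmax over the steps of this chunk
        picks.append(int(lo + rng.choice(hi - lo, p=p)))
    return picks
```

With uniform logits this reduces to plain uniform sampling within each chunk; training the logits would then concentrate updates on the steps that matter most inside each chunk.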