🤖 AI Summary
This work addresses the challenge of jointly optimizing reasoning-driven text generation and image generation within a unified framework to support multimodal interleaved generation. The process is formulated as a Markov decision process with sparse terminal rewards, and UniGRPO, a unified reinforcement learning framework based on GRPO, is proposed to co-optimize the text and image generation policies. Key methodological changes include eliminating classifier-free guidance so that rollouts remain linear and unbranched, and replacing the latent-space KL penalty with an MSE penalty on the velocity fields. Experimental results show that the approach significantly improves the quality of reasoning-guided image generation and establishes a scalable baseline for post-training multimodal interleaved generative models.
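For intuition, here is a minimal sketch of the group-relative advantage computation that standard GRPO performs under a sparse terminal reward; the array shapes and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Group-relative advantages: each rollout's terminal reward is
    normalized by the mean/std of its own group (rollouts that share
    the same prompt). `rewards` has shape (num_groups, group_size)."""
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)

# One prompt, four sampled rollouts, one scalar reward each (the reward
# is sparse: only the finished text+image trajectory is scored).
rewards = np.array([[0.9, 0.4, 0.7, 0.2]])
print(grpo_advantages(rewards))  # positive for above-average rollouts
```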
📝 Abstract
Unified models capable of interleaved generation have emerged as a promising paradigm, with the community increasingly converging on autoregressive modeling for text and flow matching for image generation. To advance this direction, we propose a unified reinforcement learning framework tailored for interleaved generation. We validate our approach on its fundamental unit: a single round of reasoning-driven image generation, in which the model first expands the user prompt through reasoning and then synthesizes the image. Formulating this multimodal generation process as a Markov Decision Process with sparse terminal rewards, we introduce UniGRPO to jointly optimize the text and image generation policies using GRPO. Adopting a minimalist methodology to avoid over-design, we leverage established training recipes for both modalities, integrating standard GRPO for reasoning with FlowGRPO for visual synthesis. To ensure scalability to multi-round interleaved generation, we introduce two critical modifications to the original FlowGRPO: (1) eliminating classifier-free guidance to maintain linear, unbranched rollouts, which is essential for scaling to complex scenarios involving multi-turn interactions and multi-condition generation (e.g., editing); and (2) replacing the standard latent KL penalty with an MSE penalty applied directly to the velocity fields, providing a more robust and direct regularization signal that effectively mitigates reward hacking. Our experiments demonstrate that this unified training recipe significantly enhances image generation quality through reasoning, providing a robust and scalable baseline for future post-training of fully interleaved models.
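To make the two FlowGRPO modifications concrete, the sketch below shows (a) a rollout step that uses a single conditional forward pass, with no unconditional branch for classifier-free guidance, and (b) an MSE penalty on velocity fields standing in for the latent KL term. All names (`v_theta`, `sigma`, `beta`) and the exact step form are our assumptions, not the authors' code.

```python
import torch

def rollout_step_no_cfg(v_theta, x_t, t, cond, dt, sigma):
    """Schematic stochastic denoising step WITHOUT classifier-free guidance:
    a single conditional velocity prediction keeps the rollout linear and
    unbranched (CFG would mix in a second, unconditional forward pass)."""
    v = v_theta(x_t, t, cond)                        # one forward pass per step
    noise = sigma * (dt ** 0.5) * torch.randn_like(x_t)
    return x_t + v * dt + noise                      # noise enables exploration

def velocity_mse_penalty(v_policy, v_ref, beta=0.01):
    """Regularizer replacing the latent-space KL penalty: penalize the
    policy's velocity field for drifting from the frozen reference model's
    prediction at the same (x_t, t)."""
    return beta * torch.mean((v_policy - v_ref) ** 2)
```

Because the penalty acts directly on the quantity the flow model parameterizes, it constrains every denoising step rather than an approximate latent distribution, which is the abstract's stated rationale for its robustness against reward hacking.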