Group Critical-token Policy Optimization for Autoregressive Image Generation

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In existing autoregressive vision generation, RLVR optimizes all image tokens uniformly, ignoring their heterogeneous contributions to generation quality. Method: This paper proposes a critical-token-aware policy optimization framework that jointly identifies critical tokens along three dimensions—causal dependency, entropy-induced spatial structure, and diversity—and introduces a confidence-divergence-based dynamic per-token advantage weighting mechanism to enable differentiated policy updates. Contribution/Results: By focusing gradient optimization on high-impact tokens within the RLVR framework, our method achieves superior performance using only 30% of the tokens required by full-token optimization. It outperforms GRPO across multiple text-to-image benchmarks, significantly improving both generation fidelity and training efficiency.

📝 Abstract
Recent studies have extended Reinforcement Learning with Verifiable Rewards (RLVR) to autoregressive (AR) visual generation and achieved promising progress. However, existing methods typically apply uniform optimization across all image tokens, while the varying contributions of different image tokens to RLVR training remain unexplored. In fact, the key obstacle lies in how to identify the more critical image tokens during AR generation and implement effective token-wise optimization for them. To tackle this challenge, we propose $\textbf{G}$roup $\textbf{C}$ritical-token $\textbf{P}$olicy $\textbf{O}$ptimization ($\textbf{GCPO}$), which facilitates effective policy optimization on critical tokens. We identify the critical tokens in RLVR-based AR generation from three perspectives: $\textbf{(1)}$ Causal dependency: early tokens fundamentally determine the later tokens and the final image due to unidirectional dependency; $\textbf{(2)}$ Entropy-induced spatial structure: tokens with high entropy gradients correspond to image structure and bridge distinct visual regions; $\textbf{(3)}$ RLVR-focused token diversity: tokens with low visual similarity across a group of sampled images contribute richer token-level diversity. For these identified critical tokens, we further introduce a dynamic token-wise advantage weight, based on the confidence divergence between the policy model and the reference model, to encourage exploration. By leveraging only 30% of the image tokens, GCPO achieves better performance than GRPO with full tokens. Extensive experiments on multiple text-to-image benchmarks, covering both AR models and unified multimodal models, demonstrate the effectiveness of GCPO for AR visual generation.
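The three selection criteria in the abstract can be combined into a single per-token score, from which the top 30% of tokens are kept for policy updates. The sketch below is a hypothetical reading of that idea, not the paper's implementation: the linear causal decay, the absolute entropy gradient, the unique-id diversity fraction, and the weights `w_causal`/`w_entropy`/`w_div` are all illustrative assumptions.

```python
import numpy as np

def _norm(x):
    """Min-max normalize a 1-D score vector to [0, 1]."""
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def critical_token_mask(logits, positions, group_tokens, keep_ratio=0.3,
                        w_causal=1.0, w_entropy=1.0, w_div=1.0):
    """Score each image token and keep the top `keep_ratio` fraction.

    logits:       (T, V) per-token logits from the policy model
    positions:    (T,)   token positions in raster-scan generation order
    group_tokens: (G, T) token ids of G images sampled for the same prompt
    """
    T = logits.shape[0]
    # (1) Causal dependency: earlier tokens get higher scores, since later
    #     tokens depend on them (hypothetical linear decay over position).
    causal = 1.0 - positions / positions.max()
    # (2) Entropy-induced structure: tokens where entropy changes sharply
    #     along the sequence are treated as structural boundaries.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    ent_grad = np.abs(np.gradient(entropy))
    # (3) Diversity: positions whose token ids disagree across the sampled
    #     group contribute more token-level diversity (fraction of unique ids).
    diversity = np.array([len(set(group_tokens[:, t])) / group_tokens.shape[0]
                          for t in range(T)])
    score = w_causal * causal + w_entropy * _norm(ent_grad) + w_div * diversity
    k = max(1, int(keep_ratio * T))
    mask = np.zeros(T, dtype=bool)
    mask[np.argsort(score)[-k:]] = True
    return mask
```

With `keep_ratio=0.3`, gradients would then flow only through the masked tokens, matching the abstract's claim of optimizing with 30% of the image tokens.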
Problem

Research questions and friction points this paper is trying to address.

Identifies critical tokens in autoregressive image generation
Optimizes policy for tokens with causal dependency and entropy
Enhances token diversity using dynamic advantage weighting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies critical tokens using causal dependency
Uses entropy gradients to locate structural tokens
Applies dynamic advantage weights for exploration
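The dynamic advantage weighting described above can be sketched as scaling the group-normalized GRPO advantage per token by the confidence divergence between the policy and reference models. This is a minimal illustration under assumed forms: the `tanh` bounding, the `beta` scale, and zeroing non-critical tokens are hypothetical choices, not details taken from the paper.

```python
import numpy as np

def weighted_token_advantages(adv, logp_policy, logp_ref, critical_mask,
                              beta=1.0):
    """Dynamic per-token advantage weighting (hypothetical form).

    adv:           scalar group-normalized advantage for one sampled image
    logp_policy:   (T,) log-probs of the chosen tokens under the policy model
    logp_ref:      (T,) log-probs of the same tokens under the reference model
    critical_mask: (T,) boolean mask of critical tokens
    """
    # Confidence divergence: where the policy is less confident than the
    # reference, upweight the token to encourage exploration there.
    divergence = logp_ref - logp_policy        # > 0: policy under-confident
    weight = 1.0 + beta * np.tanh(divergence)  # bounded dynamic weight
    return adv * weight * critical_mask        # zero out non-critical tokens
```

When the two models agree (`divergence == 0`), the weight reduces to 1 and critical tokens receive the plain GRPO advantage, so the weighting only perturbs updates where the models' confidences diverge.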