Fine-Tuning Next-Scale Visual Autoregressive Models with Group Relative Policy Optimization

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of aligning visual autoregressive (VAR) models with fine-grained human preferences, a limitation of existing approaches. We propose the first reinforcement learning fine-tuning framework for VAR models based on Group Relative Policy Optimization (GRPO). Methodologically, we design a multi-objective reward function that integrates CLIP-based semantic guidance with an aesthetic predictor to jointly optimize image quality, style controllability, and cross-distribution generalization. Our key contributions are: (1) the first application of GRPO to VAR fine-tuning, yielding significantly improved training stability and sample efficiency; (2) high-fidelity generation under unseen artistic style prompts, overcoming constraints imposed by the pretraining data distribution; and (3) full exploitation of VAR's autoregressive structure, achieving substantially lower sampling latency than diffusion models. Extensive experiments demonstrate superior performance in image fidelity, style consistency, and inference efficiency.
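The multi-objective reward described above can be sketched as a weighted combination of the two signals. The function name, weights, and score scales below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
# Sketch of a multi-objective reward combining CLIP-based semantic
# alignment with an aesthetic-predictor score. The weights w_clip and
# w_aes are hypothetical; real values would be tuned per task.
def combined_reward(clip_score: float, aesthetic_score: float,
                    w_clip: float = 1.0, w_aes: float = 0.5) -> float:
    """Weighted sum of text-image semantic alignment and aesthetic quality."""
    return w_clip * clip_score + w_aes * aesthetic_score
```

In practice the two scores live on different scales (CLIP similarity is roughly in [0, 1], aesthetic predictors often score 1-10), so the weights also serve to balance their magnitudes.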

📝 Abstract
Fine-tuning pre-trained generative models with Reinforcement Learning (RL) has emerged as an effective approach for aligning outputs more closely with nuanced human preferences. In this paper, we investigate the application of Group Relative Policy Optimization (GRPO) to fine-tune next-scale visual autoregressive (VAR) models. Our empirical results demonstrate that this approach enables alignment to intricate reward signals derived from aesthetic predictors and CLIP embeddings, significantly enhancing image quality and enabling precise control over the generation style. Interestingly, by leveraging CLIP, our method can help VAR models generalize beyond their initial ImageNet distribution: through RL-driven exploration, these models can generate images aligned with prompts referencing image styles that were absent during pre-training. In summary, we show that RL-based fine-tuning is both efficient and effective for VAR models, benefiting particularly from their fast inference speeds, which are advantageous for online sampling, an aspect that poses significant challenges for diffusion-based alternatives.
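GRPO's defining step, from which the method takes its name, is to compute advantages relative to a group of samples drawn for the same prompt, using the group mean as the baseline instead of a learned value function. A minimal generic sketch of that normalization (not code from this paper) looks like:

```python
import statistics

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Group-relative advantages as used in GRPO-style fine-tuning:
    A_i = (r_i - mean(group)) / (std(group) + eps).
    The group mean replaces a learned value baseline; eps guards
    against zero variance when all samples score identically."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]
```

Each generated image in the group is then reinforced or suppressed in proportion to its advantage. This is where VAR's fast sampling pays off: GRPO needs a fresh group of samples per prompt at every update, which is cheap for autoregressive decoding but costly for diffusion models.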
Problem

Research questions and friction points this paper is trying to address.

- Fine-tuning visual autoregressive models with RL for human preference alignment
- Enhancing image quality and style control via GRPO and CLIP
- Enabling generation beyond pre-trained distributions through RL exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Fine-tuning VAR models with GRPO
- Aligning outputs using aesthetic and CLIP rewards
- Enhancing image quality and style control