🤖 AI Summary
This paper addresses the challenge of aligning visual autoregressive (VAR) models with fine-grained human preferences—a limitation in existing approaches. We propose the first reinforcement learning fine-tuning framework for VAR models based on Group Relative Policy Optimization (GRPO). Methodologically, we design a multi-objective reward function integrating CLIP-based semantic guidance and an aesthetic predictor to jointly optimize image quality, style controllability, and cross-distribution generalization. Our key contributions are: (1) the first application of GRPO to VAR fine-tuning, yielding significantly improved training stability and sample efficiency; (2) high-fidelity generation under unseen artistic style prompts, thereby overcoming constraints imposed by pretraining data distribution; and (3) full exploitation of VAR’s autoregressive structure, achieving substantially lower sampling latency compared to diffusion models. Extensive experiments demonstrate superior performance in image fidelity, style consistency, and inference efficiency.
📝 Abstract
Fine-tuning pre-trained generative models with Reinforcement Learning (RL) has emerged as an effective approach for aligning outputs more closely with nuanced human preferences. In this paper, we investigate the application of Group Relative Policy Optimization (GRPO) to fine-tune next-scale visual autoregressive (VAR) models. Our empirical results demonstrate that this approach enables alignment to intricate reward signals derived from aesthetic predictors and CLIP embeddings, significantly enhancing image quality and enabling precise control over the generation style. Interestingly, by leveraging CLIP, our method can help VAR models generalize beyond their initial ImageNet distribution: through RL-driven exploration, these models can generate images aligned with prompts referencing image styles that were absent during pre-training. In summary, we show that RL-based fine-tuning is both efficient and effective for VAR models, benefiting particularly from their fast inference speeds, which are advantageous for online sampling, an aspect that poses significant challenges for diffusion-based alternatives.