FocusDiff: Advancing Fine-Grained Text-Image Alignment for Autoregressive Visual Generation through RL

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing autoregressive text-to-image models exhibit significant limitations in fine-grained text–image alignment, failing to achieve precise token-level visual control—particularly when distinguishing syntactically similar prompts with subtle semantic differences. This work proposes an autoregressive generation framework explicitly designed for fine-grained alignment. First, we introduce PairComp, the first benchmark dedicated to evaluating local semantic alignment capability. Second, we design a PPO-based, difference-aware alignment mechanism that enables token-level control driven directly by local semantic discrepancies—a novel paradigm in autoregressive vision-language generation. Third, we integrate contrastive text pair construction with autoregressive visual token modeling to enhance discriminative learning. Our method achieves state-of-the-art performance on mainstream benchmarks and improves fine-grained alignment accuracy by 32.7% over the strongest baseline on PairComp.

Technology Category

Application Category

📝 Abstract
Recent studies extend the autoregression paradigm to text-to-image generation, achieving performance comparable to diffusion models. However, our new PairComp benchmark -- featuring test cases of paired prompts with similar syntax but different fine-grained semantics -- reveals that existing models struggle with fine-grained text-image alignment thus failing to realize precise control over visual tokens. To address this, we propose FocusDiff, which enhances fine-grained text-image semantic alignment by focusing on subtle differences between similar text-image pairs. We construct a new dataset of paired texts and images with similar overall expressions but distinct local semantics, further introducing a novel reinforcement learning algorithm to emphasize such fine-grained semantic differences for desired image generation. Our approach achieves state-of-the-art performance on existing text-to-image benchmarks and significantly outperforms prior methods on PairComp.
Problem

Research questions and friction points this paper is trying to address.

Improves fine-grained text-image alignment in autoregressive models
Addresses failure in precise control over visual tokens
Enhances semantic alignment for similar text-image pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

FocusDiff enhances fine-grained text-image alignment
Uses paired texts and images dataset
Introduces novel reinforcement learning algorithm
🔎 Similar Papers
No similar papers found.