UNCAGE: Contrastive Attention Guidance for Masked Generative Transformers in Text-to-Image Generation

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In text-to-image (T2I) generation, masked generative Transformers suffer from inherent limitations in compositional reasoning and attribute-object alignment, leading to semantic mismatch between text and image. To address this, we propose a training-free contrastive attention-guided decoding method: leveraging cross-modal contrastive attention maps, it dynamically schedules the token decoding order, prioritizing tokens that correspond to concrete objects, to strengthen structural consistency and semantic binding fidelity. The method integrates into standard masked generative Transformer architectures and preserves efficient parallel decoding without architectural modification. Evaluated on multiple benchmarks, including COCO and Flickr30K, it consistently improves FID, CLIP-Score, and fine-grained alignment metrics, yielding significant gains in text-image alignment at minimal inference overhead.

📝 Abstract
Text-to-image (T2I) generation has been actively studied using Diffusion Models and Autoregressive Models. Recently, Masked Generative Transformers have gained attention as an alternative to Autoregressive Models to overcome the inherent limitations of causal attention and autoregressive decoding through bidirectional attention and parallel decoding, enabling efficient and high-quality image generation. However, compositional T2I generation remains challenging, as even state-of-the-art Diffusion Models often fail to accurately bind attributes and achieve proper text-image alignment. While Diffusion Models have been extensively studied for this issue, Masked Generative Transformers exhibit similar limitations but have not been explored in this context. To address this, we propose Unmasking with Contrastive Attention Guidance (UNCAGE), a novel training-free method that improves compositional fidelity by leveraging attention maps to prioritize the unmasking of tokens that clearly represent individual objects. UNCAGE consistently improves performance in both quantitative and qualitative evaluations across multiple benchmarks and metrics, with negligible inference overhead. Our code is available at https://github.com/furiosa-ai/uncage.
Problem

Research questions and friction points this paper is trying to address.

Improving text-to-image alignment in Masked Generative Transformers
Enhancing attribute binding for compositional image generation
Mitigating compositional failures that persist despite bidirectional attention and parallel decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked Generative Transformers with bidirectional attention
Contrastive Attention Guidance for token prioritization
Training-free method for improved compositional fidelity
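
The core idea above can be sketched as a reordering of the unmasking schedule: among the still-masked image tokens, prefer those whose cross-attention to an object's text tokens dominates their attention to competing text tokens. The sketch below is illustrative only; the score definition (confidence plus a contrastive attention term) and all variable names are assumptions for exposition, not UNCAGE's exact formulation.

```python
def contrastive_unmask_order(attn_pos, attn_neg, confidence, masked, k):
    """Pick which masked image tokens to unmask at this decoding step.

    Illustrative sketch (assumed scoring, not the paper's exact rule):
    tokens whose attention to the target object's text tokens (attn_pos)
    exceeds their attention to competing text tokens (attn_neg) are
    unmasked earlier, on top of the usual confidence-based ordering.

    attn_pos:   per-image-token attention to the object's text tokens
    attn_neg:   per-image-token attention to competing text tokens
    confidence: per-image-token model confidence for the top prediction
    masked:     per-image-token bool, True if the token is still masked
    k:          number of tokens to unmask this step
    """
    scored = []
    for i, still_masked in enumerate(masked):
        if not still_masked:
            continue  # only masked tokens are candidates for unmasking
        contrast = attn_pos[i] - attn_neg[i]        # contrastive guidance signal
        scored.append((confidence[i] + contrast, i))
    scored.sort(reverse=True)                       # highest score first
    return [i for _, i in scored[:k]]


# Toy example: 6 image tokens; tokens 1 and 4 attend strongly to the object.
attn_pos = [0.1, 0.9, 0.2, 0.3, 0.8, 0.1]
attn_neg = [0.5, 0.1, 0.4, 0.2, 0.1, 0.6]
conf = [0.5] * 6
masked = [True, True, False, True, True, True]
order = contrastive_unmask_order(attn_pos, attn_neg, conf, masked, k=2)
print(order)  # [1, 4]: the clearly object-bound tokens are unmasked first
```

Because the schedule only reorders which tokens get committed at each parallel step, it adds no training and essentially no inference cost, which matches the training-free, low-overhead framing above.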