STAR: Scale-wise Text-conditioned AutoRegressive image generation

📅 2024-06-16
📈 Citations: 14 · Influential: 2
🤖 AI Summary
To address detail distortion, structural instability, and weak text–image alignment in high-resolution (1024×1024) text-to-image generation, this paper proposes a scale-wise autoregressive generation framework. Methodologically, it introduces: (1) a novel stable sampling mechanism that injects causal dependencies into within-scale token sampling, curbing the structural instability caused by drawing all tokens of a scale simultaneously; (2) a normalized 2D Rotary Positional Encoding (RoPE) that gives relative positions a consistent interpretation across token maps of different scales; and (3) a pretrained text encoder whose representations condition generation, enhancing fine-grained semantic alignment and generalizability. The framework achieves state-of-the-art performance at 1024×1024 resolution, significantly improving image fidelity, text–image consistency, and aesthetic quality, and it generates a single image in only 2.21 seconds on an A100 GPU, outperforming existing diffusion-based and autoregressive models in both speed and quality.
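The scale-wise (next-scale) paradigm summarized above can be made concrete with a short sketch: instead of emitting tokens one at a time, the model predicts an entire token map per scale, coarse to fine, each map conditioned on the text embedding and all coarser maps. The predictor below is a random-logits stub and every interface name is hypothetical; this is a minimal illustration of the sampling loop, not the authors' implementation.

```python
# Minimal, self-contained sketch of scale-wise ("next-scale")
# autoregressive sampling. The transformer is stubbed with random
# logits; in STAR it would be the text-conditioned transformer, and
# the finest token map would go through a frozen VQ decoder.
import torch

VOCAB = 4096
SCALES = (1, 2, 4, 8, 16)          # token-map side lengths, coarse to fine

def predict_logits(text_emb, prefix, side):
    """Stub for the transformer: returns logits for every position of
    the side x side token map, conditioned on text and coarser maps."""
    return torch.randn(side * side, VOCAB)

@torch.no_grad()
def generate(text_emb):
    prefix = []                     # token maps generated so far
    for side in SCALES:
        logits = predict_logits(text_emb, prefix, side)
        # Unlike token-wise AR, the whole scale is sampled in parallel.
        tokens = torch.multinomial(logits.softmax(-1), 1).view(side, side)
        prefix.append(tokens)
    return prefix                   # finest map would be VQ-decoded to pixels

maps = generate(text_emb=torch.randn(77, 768))   # e.g. text-token features
print([m.shape for m in maps])
```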

📝 Abstract
We introduce STAR, a text-to-image model that employs a scale-wise auto-regressive paradigm. Unlike VAR, which is constrained to class-conditioned synthesis for images up to 256×256, STAR enables text-driven image generation up to 1024×1024 through three key designs. First, we introduce a pre-trained text encoder to extract and adopt representations for textual constraints, enhancing details and generalizability. Second, given the inherent structural correlation across different scales, we leverage 2D Rotary Positional Encoding (RoPE) and tweak it into a normalized version, ensuring consistent interpretation of relative positions across token maps and stabilizing the training process. Third, we observe that simultaneously sampling all tokens within a single scale can disrupt inter-token relationships, leading to structural instability, particularly in high-resolution generation. To address this, we propose a novel stable sampling method that incorporates causal relationships into the sampling process, ensuring both rich details and stable structures. Compared to previous diffusion and auto-regressive models, STAR surpasses existing benchmarks in fidelity, text–image consistency, and aesthetic quality, requiring just 2.21 s for a 1024×1024 image on an A100. This highlights the potential of auto-regressive methods in high-quality image synthesis, offering new directions for text-to-image generation.
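The abstract's normalized 2D RoPE can be read as follows: divide each token's (row, column) grid coordinates by the token map's side length, so the same relative image position produces the same rotary phase at every scale. The sketch below follows that reading; the frequency layout and the exact normalization are assumptions rather than the paper's verified formulation.

```python
# Sketch of normalized 2D RoPE under one reading of the abstract: grid
# positions are divided by the map's side length, so a token at the
# image center gets identical rotary phases at every scale.
import torch

def normalized_rope_2d(x, side, base=100.0):
    """x: (side*side, dim) queries/keys. Rotates dim/2 channels with the
    normalized row coordinate and dim/2 with the normalized column."""
    d = x.shape[-1] // 2                                  # dims per axis
    freqs = base ** (-torch.arange(0, d, 2) / d)          # (d/2,)
    rows = torch.arange(side).repeat_interleave(side) / side  # in [0, 1)
    cols = torch.arange(side).repeat(side) / side
    out = []
    for pos, part in ((rows, x[..., :d]), (cols, x[..., d:])):
        ang = pos[:, None] * freqs[None, :]               # (side*side, d/2)
        cos, sin = ang.cos(), ang.sin()
        x1, x2 = part[..., 0::2], part[..., 1::2]
        # Standard RoPE pairwise rotation applied to this axis's half.
        rot = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), -1)
        out.append(rot.flatten(-2))
    return torch.cat(out, -1)

q = torch.randn(16 * 16, 64)
print(normalized_rope_2d(q, side=16).shape)   # torch.Size([256, 64])
```

Because positions are normalized before the rotation, token maps of 16×16 and 64×64 share the same phase range, which is one way to obtain the "consistent interpretation of relative positions across token maps" the abstract describes.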
Problem

Research questions and friction points this paper is trying to address.

Text-conditioned high-resolution image generation
Stable auto-regressive sampling method
Enhanced text-image consistency and quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scale-wise autoregressive image generation
2D Rotary Positional Encoding
Causal stable sampling method (sketched below)
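A toy reading of the causal stable sampling bullet: rather than drawing all tokens of a scale in one independent pass, commit them in a few rounds so later draws can condition on earlier ones. The round schedule and the predictor interface below are assumptions; the paper's actual procedure may differ.

```python
# Toy sketch of causal sampling within a single scale: tokens are
# committed in several rounds, and each round re-runs the predictor so
# that later draws can depend on tokens already committed.
import torch

def causal_scale_sampling(predict_logits, side, rounds=4):
    n = side * side
    tokens = torch.full((n,), -1, dtype=torch.long)    # -1 = not yet sampled
    order = torch.randperm(n).chunk(rounds)            # positions per round
    for idx in order:
        # Re-predict with the tokens committed so far as extra context.
        logits = predict_logits(tokens)                # (n, vocab)
        draw = torch.multinomial(logits[idx].softmax(-1), 1).squeeze(-1)
        tokens[idx] = draw                             # commit this group
    return tokens.view(side, side)

# Stub predictor standing in for the transformer at one scale.
stub = lambda committed: torch.randn(committed.numel(), 4096)
print(causal_scale_sampling(stub, side=8).shape)       # torch.Size([8, 8])
```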
👥 Authors

Xiaoxiao Ma (Oracle; Macquarie University): LLM, deep generative models, anomaly detection, graph neural networks
Mohan Zhou (Harbin Institute of Technology): Representation Learning, Image Recognition
Tao Liang (Du Xiaoman)
Yalong Bai (Du Xiaoman)
Tiejun Zhao (Harbin Institute of Technology)
H. Chen (University of Science and Technology of China)
Yi Jin (University of Science and Technology of China)