DPAR: Dynamic Patchification for Efficient Autoregressive Visual Generation

📅 2025-12-26
🤖 AI Summary
To address the prohibitive computational and memory overhead in high-resolution autoregressive image generation—caused by quadratic growth of token count with resolution—this paper proposes an entropy-aware dynamic patching mechanism. It introduces the prediction entropy of a lightweight, unsupervised autoregressive model as an information-driven criterion for adaptively merging tokens into variable-sized image patches. The method supports dynamic patching during training and seamless inference-time scaling to larger patch sizes, while remaining fully compatible with standard decoder-only architectures. Key components include entropy-guided token aggregation, variable-length patch embedding, and multi-scale robust representation learning. On ImageNet, the approach reduces token counts by 1.81× and 2.06× at 256×256 and 384×384 resolutions, respectively, cuts training FLOPs by up to 40%, accelerates convergence, and improves FID by 27.1% over the baseline.
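The entropy-guided token aggregation described above can be sketched as follows. The threshold rule, the patch-size cap, and the function names are illustrative assumptions for exposition, not the paper's exact algorithm: the idea is that runs of low-entropy (easily predicted) tokens are merged into one patch, while a high-entropy token starts a new patch.

```python
import numpy as np

def token_entropies(probs):
    """Shannon entropy of each token's next-token distribution.

    probs: (num_tokens, vocab_size) array of predictive distributions
    from a lightweight autoregressive model (assumed given here).
    """
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def merge_tokens(entropies, threshold, max_patch=4):
    """Greedily group consecutive tokens into variable-sized patches.

    A token whose entropy exceeds `threshold` starts a new patch, as
    does hitting the `max_patch` cap; both hyperparameters are
    illustrative, not the paper's confirmed rule.
    Returns a list of (start, length) patches covering all tokens.
    """
    patches, start = [], 0
    for i, h in enumerate(entropies):
        length = i - start
        if length > 0 and (h > threshold or length >= max_patch):
            patches.append((start, length))
            start = i
    patches.append((start, len(entropies) - start))
    return patches
```

With this sketch, a smooth background region (low entropy) collapses into a few large patches, while detailed regions keep near token-level granularity, which is how compute is shifted toward high-information areas.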

📝 Abstract
Decoder-only autoregressive image generation typically relies on fixed-length tokenization schemes whose token counts grow quadratically with resolution, substantially increasing the computational and memory demands of attention. We present DPAR, a novel decoder-only autoregressive model that dynamically aggregates image tokens into a variable number of patches for efficient image generation. Our work is the first to demonstrate that next-token prediction entropy from a lightweight and unsupervised autoregressive model provides a reliable criterion for merging tokens into larger patches based on information content. DPAR makes minimal modifications to the standard decoder architecture, ensuring compatibility with multimodal generation frameworks and allocating more compute to the generation of high-information image regions. Further, we demonstrate that training with dynamically sized patches yields representations that are robust to patch boundaries, allowing DPAR to scale to larger patch sizes at inference. DPAR reduces token count by 1.81× and 2.06× at ImageNet 256×256 and 384×384 generation resolutions respectively, reducing training costs by up to 40% FLOPs. Further, our method exhibits faster convergence and improves FID by up to 27.1% relative to baseline models.
Problem

Research questions and friction points this paper is trying to address.

Reduces computational and memory demands in autoregressive image generation
Dynamically aggregates tokens into variable-sized patches for efficiency
Improves training efficiency and image quality with adaptive token merging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic patch aggregation based on entropy
Minimal decoder modifications for compatibility
Training with variable patches for scalability
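One way a standard decoder could consume variable-sized patches with minimal modification, as the bullets above suggest, is to pool each patch's token embeddings into a single input embedding. The sketch below uses mean-pooling plus a learned patch-size embedding; this is an assumed illustrative scheme, not the paper's confirmed embedding design.

```python
import numpy as np

def embed_patches(token_embs, patches, size_embs):
    """Pool each variable-length patch into one decoder input embedding.

    token_embs: (num_tokens, dim) per-token embeddings
    patches:    list of (start, length) spans, e.g. from entropy merging
    size_embs:  (max_patch + 1, dim) learned lookup indexed by patch
                length, so the decoder knows how many tokens it covers
                (an assumed design choice for this sketch)
    Returns a (num_patches, dim) sequence, shorter than the token grid.
    """
    out = []
    for start, length in patches:
        pooled = token_embs[start:start + length].mean(axis=0)
        out.append(pooled + size_embs[length])
    return np.stack(out)
```

Because the decoder only sees the pooled sequence, its attention cost scales with the number of patches rather than the number of tokens, which is where the reported FLOPs savings would come from.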