Scale-Wise VAR is Secretly Discrete Diffusion

📅 2025-09-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work establishes, for the first time, a rigorous theoretical equivalence between autoregressive transformers (VARs) equipped with Markovian attention masks and discrete diffusion models. Addressing inherent architectural redundancy and suboptimal optimization in VARs, we propose the Scalable Refinement and Discrete Diffusion (SRDD) framework: it reformulates VARs as iterative discrete diffusion processes, replaces redundant autoregressive decoding with lightweight refinement modules, and incorporates the progressive optimization mechanism of diffusion models. Experiments on ImageNet and CIFAR-10 demonstrate that SRDD significantly accelerates training convergence and improves zero-shot reconstruction fidelity. Specifically, it reduces inference latency by 37% and improves FID by 12.6%, achieving—within the autoregressive paradigm—the first simultaneous realization of high computational efficiency and high-fidelity image generation.

Technology Category

Application Category

📝 Abstract
Autoregressive (AR) transformers have emerged as a powerful paradigm for visual generation, largely due to their scalability, computational efficiency and unified architecture with language and vision. Among them, next scale prediction Visual Autoregressive Generation (VAR) has recently demonstrated remarkable performance, even surpassing diffusion-based models. In this work, we revisit VAR and uncover a theoretical insight: when equipped with a Markovian attention mask, VAR is mathematically equivalent to a discrete diffusion. We term this reinterpretation as Scalable Visual Refinement with Discrete Diffusion (SRDD), establishing a principled bridge between AR transformers and diffusion models. Leveraging this new perspective, we show how one can directly import the advantages of diffusion such as iterative refinement and reduce architectural inefficiencies into VAR, yielding faster convergence, lower inference cost, and improved zero-shot reconstruction. Across multiple datasets, we show that the diffusion based perspective of VAR leads to consistent gains in efficiency and generation.
Problem

Research questions and friction points this paper is trying to address.

VAR transformers are mathematically equivalent to discrete diffusion models
Bridging autoregressive transformers with diffusion model advantages
Improving generation efficiency and convergence through diffusion perspective
Innovation

Methods, ideas, or system contributions that make the work stand out.

VAR uses Markovian attention mask
Mathematically equivalent to discrete diffusion
Imports iterative refinement from diffusion
🔎 Similar Papers
No similar papers found.