MADFormer: Mixed Autoregressive and Diffusion Transformers for Continuous Image Generation

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hybrid autoregressive (AR)–diffusion models lack systematic principles for allocating modeling capacity between AR and diffusion components. Method: We propose MADFormer, the first framework to realize block-level vertical hybridization within a unified Transformer architecture: AR layers capture global inter-block dependencies, while diffusion layers perform iterative intra-block refinement. We uncover empirical principles governing AR–diffusion capacity allocation and introduce spatial tiling and partitioning strategies tailored for high-resolution image generation. Results: Experiments on FFHQ-1024 and ImageNet demonstrate that MADFormer achieves up to a 75% improvement in FID under compute constraints, significantly enhancing the quality–efficiency trade-off for 1024×1024 image synthesis.

📝 Abstract
Recent progress in multimodal generation has increasingly combined autoregressive (AR) and diffusion-based approaches, leveraging their complementary strengths: AR models capture long-range dependencies and produce fluent, context-aware outputs, while diffusion models operate in continuous latent spaces to refine high-fidelity visual details. However, existing hybrids often lack systematic guidance on how and why to allocate model capacity between these paradigms. In this work, we introduce MADFormer, a Mixed Autoregressive and Diffusion Transformer that serves as a testbed for analyzing AR-diffusion trade-offs. MADFormer partitions image generation into spatial blocks, using AR layers for one-pass global conditioning across blocks and diffusion layers for iterative local refinement within each block. Through controlled experiments on FFHQ-1024 and ImageNet, we identify two key insights: (1) block-wise partitioning significantly improves performance on high-resolution images, and (2) vertically mixing AR and diffusion layers yields better quality-efficiency balances, improving FID by up to 75% under constrained inference compute. Our findings offer practical design principles for future hybrid generative models.
Problem

Research questions and friction points this paper is trying to address.

Allocating model capacity between AR and diffusion paradigms systematically
Improving high-resolution image generation via block-wise partitioning
Balancing quality and efficiency by mixing AR and diffusion layers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines AR and diffusion for image generation
Partitions the image into spatial blocks for block-wise processing
Vertically mixes AR and diffusion layers
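The generation scheme described above (AR layers for one-pass conditioning across spatial blocks, diffusion layers for iterative refinement within each block) can be sketched in a toy form. This is a minimal illustration, not the paper's implementation: `ar_condition` and `diffusion_refine` are hypothetical stand-ins for the AR and diffusion Transformer layers, reduced to simple numpy arithmetic so the block-wise control flow is visible.

```python
import numpy as np

def partition_into_blocks(image, block):
    """Split an (H, W, C) image into a raster-order list of spatial blocks."""
    h, w, _ = image.shape
    return [image[i:i + block, j:j + block, :]
            for i in range(0, h, block)
            for j in range(0, w, block)]

def ar_condition(prev_blocks):
    # Stand-in for the AR layers: one-pass global conditioning on all
    # previously generated blocks (here reduced to their mean as a toy context).
    if not prev_blocks:
        return 0.0
    return float(np.mean(prev_blocks))

def diffusion_refine(context, shape, steps=4, seed=0):
    # Stand-in for the diffusion layers: iterative local refinement that
    # starts from noise and is pulled toward the AR context each step.
    x = np.random.default_rng(seed).standard_normal(shape)
    for _ in range(steps):
        x = 0.5 * x + 0.5 * context  # toy "denoising" update
    return x

def generate(h=8, w=8, c=3, block=4):
    """Generate an image block by block: AR across blocks, diffusion within."""
    n_rows, n_cols = h // block, w // block
    blocks = []
    for _ in range(n_rows * n_cols):
        ctx = ar_condition(blocks)                               # inter-block dependency
        blocks.append(diffusion_refine(ctx, (block, block, c)))  # intra-block refinement
    # Reassemble the raster-order blocks into the full image.
    rows = [np.concatenate(blocks[r * n_cols:(r + 1) * n_cols], axis=1)
            for r in range(n_rows)]
    return np.concatenate(rows, axis=0)

img = generate()
print(img.shape)  # (8, 8, 3)
```

The outer loop is sequential (autoregressive over blocks) while each block is produced by an iterative inner loop (diffusion steps), which is exactly the trade-off axis the paper studies: shifting layers between the two roles changes how much compute goes to global conditioning versus local refinement.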