Unified Multimodal Discrete Diffusion

📅 2025-03-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses key limitations of autoregressive (AR) multimodal models: the quality-diversity trade-off, lack of cross-modal bidirectional editing, and insufficient controllability in generation. We propose UniDisc, the first unified text-image discrete diffusion model. Methodologically, we pioneer the extension of discrete diffusion to joint text-image modeling: we discretize both modalities via ViT-based image tokenization and standard text tokenizers; introduce cross-modal attention for alignment and conditional guidance during sampling; and design a progressive denoising schedule. Our contributions include support for joint text-image generation, zero-shot bidirectional editing, multi-granularity inpainting, and fine-grained controllable generation. Empirically, UniDisc outperforms AR baselines on text-to-image generation, image captioning, and visual question answering, reduces inference FLOPs by 37%, and enables flexible quality-speed trade-offs.
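The progressive denoising loop the summary describes can be sketched in miniature. Below is a toy absorbing-state ("mask-and-unmask") discrete diffusion sampler over a joint token sequence; the `dummy_denoiser` stub, the `MASK` sentinel, the toy vocabulary, and the linear unmasking schedule are all illustrative assumptions, not UniDisc's actual network or schedule.

```python
import random

MASK = -1               # sentinel id for the absorbing [MASK] token (illustrative)
VOCAB = list(range(8))  # toy shared vocabulary standing in for text + image codebooks

def dummy_denoiser(tokens):
    # Stand-in for the joint transformer: emits one token guess per position.
    # (Hypothetical stub; the real model attends across both modalities.)
    return [random.choice(VOCAB) for _ in tokens]

def sample(seq_len, steps=4, seed=0):
    # Reverse process: start fully masked, reveal a growing fraction per step.
    random.seed(seed)
    tokens = [MASK] * seq_len
    for step in range(1, steps + 1):
        guesses = dummy_denoiser(tokens)
        target_revealed = round(seq_len * step / steps)  # linear schedule
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        n_new = target_revealed - (seq_len - len(masked))
        for i in random.sample(masked, max(0, min(n_new, len(masked)))):
            tokens[i] = guesses[i]
    return tokens

out = sample(seq_len=12)
```

Because the number of reveal steps is a free parameter, trading fewer steps for faster (coarser) generation falls out of the same loop, which is the quality-speed trade-off the summary mentions.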

๐Ÿ“ Abstract
Multimodal generative models that can understand and generate across multiple modalities are dominated by autoregressive (AR) approaches, which process tokens sequentially from left to right, or top to bottom. These models jointly handle images, text, video, and audio for various tasks such as image captioning, question answering, and image generation. In this work, we explore discrete diffusion models as a unified generative formulation in the joint text and image domain, building upon their recent success in text generation. Discrete diffusion models offer several advantages over AR models, including improved control over quality versus diversity of generated samples, the ability to perform joint multimodal inpainting (across both text and image domains), and greater controllability in generation through guidance. Leveraging these benefits, we present the first Unified Multimodal Discrete Diffusion (UniDisc) model which is capable of jointly understanding and generating text and images for a variety of downstream tasks. We compare UniDisc to multimodal AR models, performing a scaling analysis and demonstrating that UniDisc outperforms them in terms of both performance and inference-time compute, enhanced controllability, editability, inpainting, and flexible trade-off between inference time and generation quality. Code and additional visualizations are available at https://unidisc.github.io.
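Joint multimodal inpainting, one of the advantages the abstract highlights, fits naturally into the same masked-denoising loop: observed tokens (from either modality) are clamped and only the missing positions are diffused. The sketch below assumes a hypothetical `dummy_denoiser` stub and an even per-step reveal schedule; neither is UniDisc's actual model or schedule.

```python
import random

MASK = -1               # absorbing [MASK] token id (illustrative)
VOCAB = list(range(8))  # toy vocabulary standing in for text + image codebooks

def dummy_denoiser(tokens):
    # Hypothetical stub for the joint model's per-position predictions.
    return [random.choice(VOCAB) for _ in tokens]

def inpaint(observed, steps=4, seed=0):
    # Positions given as None are masked and filled by the reverse loop;
    # observed tokens are never overwritten, so they act as conditioning.
    random.seed(seed)
    tokens = [MASK if t is None else t for t in observed]
    for step in range(steps):
        guesses = dummy_denoiser(tokens)
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        if not masked:
            break
        # reveal an even share of the remaining masked positions each step
        k = max(1, len(masked) // (steps - step))
        for i in random.sample(masked, min(k, len(masked))):
            tokens[i] = guesses[i]
    return tokens

# e.g. a joint sequence where positions 0, 2, 5 are observed
filled = inpaint([3, None, 5, None, None, 7])
```

Since the mask can fall anywhere in the concatenated text-image sequence, the same routine covers text-conditioned image inpainting, image-conditioned text completion, or mixed holes in both modalities.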
Problem

Research questions and friction points this paper is trying to address.

Unified generative model for text and images
Improving control over quality and diversity
Enhancing multimodal inpainting and controllability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Multimodal Discrete Diffusion model
Joint text and image generation
Enhanced controllability and editability