Simplified and Generalized Masked Diffusion for Discrete Data

📅 2024-06-06
🏛️ arXiv.org
📈 Citations: 22
Influential: 12
🤖 AI Summary
Existing masked diffusion models suffer from overly complex formulations, suboptimal training objectives, and reliance on heuristic corrections. This paper proposes a simplified, general-purpose masked diffusion framework for discrete data generation. First, it establishes that the continuous-time variational lower bound of such models is a simple weighted integral of cross-entropy losses, yielding a unified, principled training objective. Second, it generalizes the framework to state-dependent masking schedules, removing redundant parameterization and ad hoc corrections. Experiments demonstrate that the approach outperforms prior diffusion language models at GPT-2 scale on OpenWebText, achieves superior zero-shot performance on 4 of 5 language modeling benchmarks, and attains 2.75 (CIFAR-10) and 3.40 (ImageNet 64×64) bits per dimension on pixel-level image modeling, surpassing autoregressive baselines of similar size.

📝 Abstract
Masked (or absorbing) diffusion is actively explored as an alternative to autoregressive models for generative modeling of discrete data. However, existing work in this area has been hindered by unnecessarily complex model formulations and unclear relationships between different perspectives, leading to suboptimal parameterization, training objectives, and ad hoc adjustments to counteract these issues. In this work, we aim to provide a simple and general framework that unlocks the full potential of masked diffusion models. We show that the continuous-time variational objective of masked diffusion models is a simple weighted integral of cross-entropy losses. Our framework also enables training generalized masked diffusion models with state-dependent masking schedules. When evaluated by perplexity, our models trained on OpenWebText surpass prior diffusion language models at GPT-2 scale and demonstrate superior performance on 4 out of 5 zero-shot language modeling tasks. Furthermore, our models vastly outperform previous discrete diffusion models on pixel-level image modeling, achieving 2.75 (CIFAR-10) and 3.40 (ImageNet 64x64) bits per dimension that are better than autoregressive models of similar sizes. Our code is available at https://github.com/google-deepmind/md4.
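The central identity above, that the continuous-time variational objective reduces to a weighted integral of cross-entropy losses, can be sketched as a single Monte Carlo training step. The sketch below assumes a linear masking schedule and uses a random-logits stand-in for the denoising network; all names, sizes, and constants are illustrative and not taken from the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 8          # toy vocabulary size (assumption for illustration)
MASK = VOCAB       # extra index serving as the absorbing [MASK] token
SEQ_LEN = 16

def alpha(t):
    """Linear masking schedule: probability a token is still unmasked at time t."""
    return 1.0 - t

def alpha_prime(t):
    """Derivative of the linear schedule."""
    return -1.0

def toy_model_logits(x_t):
    """Stand-in for the denoising network: random logits per position."""
    return rng.normal(size=(x_t.shape[0], VOCAB))

def masked_diffusion_loss(x0, t):
    """One Monte Carlo sample of the weighted cross-entropy objective."""
    # Forward process: each token is masked independently with prob 1 - alpha(t).
    masked = rng.random(x0.shape) < (1.0 - alpha(t))
    x_t = np.where(masked, MASK, x0)
    logits = toy_model_logits(x_t)
    # Log-probability of the clean token at every position (stable log-softmax).
    m = logits.max(-1, keepdims=True)
    log_probs = logits - m - np.log(np.exp(logits - m).sum(-1, keepdims=True))
    ce = -log_probs[np.arange(SEQ_LEN), x0]
    # ELBO weight: -alpha'(t) / (1 - alpha(t)); positive since alpha decreases.
    w = -alpha_prime(t) / (1.0 - alpha(t))
    # Cross-entropy is accumulated over masked positions only.
    return w * ce[masked].sum()

x0 = rng.integers(0, VOCAB, size=SEQ_LEN)
t = rng.uniform(1e-3, 1.0)
loss = masked_diffusion_loss(x0, t)
print(loss)
```

Averaging this estimate over uniformly sampled `t` approximates the integral; the state-dependent generalization would let the masking probability depend on the token value rather than being a single scalar schedule.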
Problem

Research questions and friction points this paper is trying to address.

Masked Diffusion Models
Complexity Optimization
Performance Enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked Diffusion Models
Continuous Time Optimization
Adaptive Adjustment
Jiaxin Shi
Google DeepMind
Kehang Han
Google DeepMind
Language · Reliability · Chemistry
Zhe Wang
Google DeepMind
Arnaud Doucet
Google DeepMind
Computational Statistics · Machine Learning · Monte Carlo methods
Michalis K. Titsias
Google DeepMind