Why Masking Diffusion Works: Condition on the Jump Schedule for Improved Discrete Diffusion

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Discrete diffusion models generate samples by gradually undoing a noising Markov process, yet the consistently best-performing discrete diffusion model, masking diffusion, does not denoise gradually. This work identifies why: discrete Markov processes evolve by discontinuous jumps at a fixed rate, and masking diffusion builds the known distribution of those jump times into the model, leaving only the question of where to jump to be learned. The same jump-time distribution can be baked into any discrete diffusion process, yielding schedule-conditioned discrete diffusion (SCUD), a framework that generalizes both classical discrete diffusion and masking diffusion while preserving the ability to encode inductive biases and structured noise in the forward process. Applied to noising processes with inductive biases on images, text, and protein data, SCUD models outperform masking diffusion.

📝 Abstract
Discrete diffusion models, like continuous diffusion models, generate high-quality samples by gradually undoing noise applied to datapoints with a Markov process. Gradual generation in theory comes with many conceptual benefits; for example, inductive biases can be incorporated into the noising Markov process, and one gains access to improved sampling algorithms. In practice, however, the consistently best performing discrete diffusion model is, surprisingly, masking diffusion, which does not denoise gradually. Here we explain the superior performance of masking diffusion by noting that it makes use of a fundamental difference between continuous and discrete Markov processes: discrete Markov processes evolve by discontinuous jumps at a fixed rate and, unlike other discrete diffusion models, masking diffusion builds in the known distribution of jump times and only learns where to jump to. We show that we can similarly bake in the known distribution of jump times into any discrete diffusion model. The resulting models, schedule-conditioned discrete diffusion (SCUD), generalize classical discrete diffusion and masking diffusion. By applying SCUD to models with noising processes that incorporate inductive biases on images, text, and protein data, we build models that outperform masking.
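
The jump-time structure described in the abstract can be made concrete with a short sketch. Assuming a constant per-dimension jump rate, the number of jumps up to time t is Poisson distributed and, given their count, the jump times are uniform, so the schedule can be sampled in closed form before any model is involved. The function below is an illustrative assumption, not code from the paper.

```python
import numpy as np

def sample_jump_schedule(num_dims, rate=1.0, t_max=1.0, rng=None):
    """Sample jump times of a constant-rate discrete Markov process.

    Each dimension jumps according to a Poisson process with the given
    rate: the number of jumps in [0, t_max] is Poisson(rate * t_max),
    and given that count the jump times are uniform on [0, t_max].
    """
    rng = np.random.default_rng() if rng is None else rng
    schedule = []
    for _ in range(num_dims):
        n_jumps = rng.poisson(rate * t_max)
        times = np.sort(rng.uniform(0.0, t_max, size=n_jumps))
        schedule.append(times)
    return schedule  # list of per-dimension jump-time arrays
```

Masking diffusion corresponds to the special case in which each dimension makes a single absorbing jump into the mask state; SCUD keeps this closed-form "when" for general noising processes and asks the model only "where" to jump.
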
Problem

Research questions and friction points this paper is trying to address.

Why does masking diffusion consistently outperform discrete diffusion models that denoise gradually?
Can the known jump-time distribution of discrete Markov processes be exploited by diffusion processes other than masking?
How can inductive biases in the noising process be retained on images, text, and protein data without giving up masking-level performance?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Schedule-conditioned discrete diffusion (SCUD) generalizes both classical discrete diffusion and masking diffusion
Bakes the known jump-time distribution of the discrete Markov process into any discrete diffusion model, so only where to jump must be learned (see the generation sketch after this list)
Outperforms masking diffusion when the noising process incorporates inductive biases on images, text, and protein data
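
As a complement to the schedule-sampling sketch above, the following sketch illustrates what schedule-conditioned generation could look like: jump times are fixed up front from their known distribution, and a learned model is queried only about where to jump at each event, processed in reverse time. The `model(x, t, dim)` interface returning a categorical distribution over the vocabulary is a hypothetical placeholder, not the paper's API.

```python
import numpy as np

def generate_with_schedule(model, num_dims, vocab, rate=1.0, t_max=1.0, rng=None):
    """Illustrative generation conditioned on a pre-sampled jump schedule.

    The *when* of every transition is fixed in advance from its known
    distribution; the model is only asked *where* to jump at each event.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Collect (jump time, dimension) events, to be replayed latest-first.
    events = []
    for d in range(num_dims):
        n_jumps = rng.poisson(rate * t_max)
        for t in rng.uniform(0.0, t_max, size=n_jumps):
            events.append((t, d))
    events.sort(reverse=True)

    x = np.full(num_dims, -1)  # -1 stands in for the fully noised state
    for t, d in events:
        probs = model(x, t, d)              # learned: where to jump, given the schedule
        x[d] = rng.choice(len(vocab), p=probs)
    return x
```

The point this sketch illustrates is that, once the schedule is conditioned on, the learning problem reduces from modeling both the timing and the destination of transitions to modeling the destination alone.
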