🤖 AI Summary
This paper challenges the foundational rationale of masked diffusion models (MDMs) for discrete sequence generation, demonstrating that their training and sampling procedures are theoretically time-agnostic and mathematically equivalent to masked language models, which undermines their characterization as diffusion models. It further identifies an inherent numerical bias in categorical sampling under 32-bit floating-point arithmetic that artificially lowers the effective temperature and reduces token diversity, thereby distorting generative-quality evaluation. Method: to address sampling inefficiency, the authors propose the first-hitting sampler (FHS), which bypasses redundant time steps and achieves a 20× speedup. Contribution/Results: through theoretical equivalence proofs, floating-point error modeling, and a unified framework derivation, the paper shows that MDMs lack robust empirical or theoretical justification for claims of superior text generation over autoregressive models, and it argues that current evaluation protocols for discrete generation tasks are unfair, calling for a principled redefinition of benchmarking standards.
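The 32-bit issue can be illustrated with a small NumPy experiment (a sketch of the underlying mechanism, not code from the paper): Gumbel-max categorical sampling draws noise g = -log(-log(u)) with u uniform on (0, 1), but in float32 the largest representable u below 1 is 1 - 2⁻²⁴, so the noise has a hard upper cap. Rare tokens only win the argmax when their noise is extreme, so truncating the right tail suppresses them, consistent with the lowered effective temperature the summary describes.

```python
import numpy as np

# Gumbel-max sampling picks argmax_i(log p_i + g_i) with g = -log(-log(u)),
# u ~ Uniform(0, 1). A low-probability token only wins when its Gumbel noise
# is extreme, so any truncation of the right tail suppresses rare tokens,
# acting like a temperature below 1.

# Largest float32 strictly below 1 is 1 - 2^-24, which caps the noise:
u_max32 = np.nextafter(np.float32(1.0), np.float32(0.0))
g_cap32 = -np.log(-np.log(u_max32))   # computed in float32, ≈ 16.6

# In float64 the largest u below 1 is 1 - 2^-53, pushing the cap far out:
u_max64 = np.nextafter(1.0, 0.0)
g_cap64 = -np.log(-np.log(u_max64))   # ≈ 36.7

print(f"float32 Gumbel cap ≈ {float(g_cap32):.1f}")
print(f"float64 Gumbel cap ≈ {float(g_cap64):.1f}")
```

The caps are roughly ln(2²⁴) ≈ 16.6 and ln(2⁵³) ≈ 36.7, so float32 discards a far larger slice of the Gumbel distribution's right tail than float64 does.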
📝 Abstract
Masked diffusion models (MDMs) have emerged as a popular research topic for generative modeling of discrete data thanks to their superior performance over other discrete diffusion models, and they are rivaling auto-regressive models (ARMs) on language modeling tasks. Recent efforts to simplify the masked diffusion framework further lead to alignment with continuous-space diffusion models and more principled training and sampling recipes. In this paper, however, we reveal that both training and sampling of MDMs are theoretically free of the time variable, arguably the key signature of diffusion models, and are instead equivalent to masked models. The connection on the sampling side is drawn by our proposed first-hitting sampler (FHS). Specifically, we show that the FHS is theoretically equivalent to MDMs' original generation process while significantly alleviating the time-consuming categorical sampling, achieving a 20$\times$ speedup. In addition, our investigation raises doubts about whether MDMs can truly beat ARMs in text generation. We identify, for the first time, an underlying numerical issue, even with the commonly used 32-bit floating-point precision, that results in inaccurate categorical sampling. We show both theoretically and empirically that it lowers the effective temperature, and the resulting decrease in token diversity makes previous evaluations, which assess generation quality solely through the incomplete generative perplexity metric, somewhat unfair.
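The first-hitting idea can be sketched as follows (a sketch under assumptions, not the paper's implementation): under the linear masking schedule α_t = 1 − t, each masked token's unmasking time, conditioned on being masked at time t, is uniform on (0, t), so the next unmasking event is the maximum of n such uniforms and can be drawn directly as t·u^(1/n). One uniformly chosen masked position is then revealed per step, skipping every time step in which nothing changes. The `toy_logits` network below is a hypothetical stand-in for a trained model.

```python
import numpy as np

def first_hitting_sample(logits_fn, seq_len, vocab_size, rng):
    """Sketch: generate a sequence in exactly seq_len unmasking events.

    Assumes the linear schedule alpha_t = 1 - t, under which a token
    masked at time t has a Uniform(0, t) unmasking time; the next event
    is the max of n i.i.d. such uniforms, i.e. t * u**(1/n).
    """
    MASK = vocab_size                      # extra id reserved for [MASK]
    x = np.full(seq_len, MASK)
    t = 1.0
    while (x == MASK).any():
        masked = np.flatnonzero(x == MASK)
        n = masked.size
        t = t * rng.random() ** (1.0 / n)  # jump to the next unmasking time
        pos = rng.choice(masked)           # the hit is uniform over masked slots
        logits = logits_fn(x, t)[pos]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        x[pos] = rng.choice(vocab_size, p=probs)
    return x

def toy_logits(x, t):
    """Hypothetical stand-in for the network: uniform logits everywhere."""
    return np.zeros((x.shape[0], 8))

rng = np.random.default_rng(0)
out = first_hitting_sample(toy_logits, seq_len=16, vocab_size=8, rng=rng)
print(out)
```

Each loop iteration reveals exactly one token, so the network is called once per generated token rather than once per discretized time step, which is where the claimed speedup over naive ancestral sampling comes from.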