Accelerated Sampling from Masked Diffusion Models via Entropy Bounded Unmasking

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low sampling efficiency of masked diffusion models (MDMs), this paper proposes EB-Sampler, an entropy-bounded dynamic multi-token unmasking mechanism. The key observation is that a partially masked sequence often determines several unknown tokens near-deterministically, so a single prediction from the masked model carries information that standard sampling procedures discard. Building on this, the authors design an adaptive sampler with provable error bounds: entropy-driven dynamic unmasking, with error-analysis-guided step-size control, decodes multiple near-deterministic tokens per model call. On code generation and mathematical reasoning benchmarks, EB-Sampler achieves a 2-3x sampling speedup with no performance degradation. It also generalizes to small-scale structured reasoning tasks, including maze navigation and Sudoku, where it significantly outperforms standard autoregressive models.

📝 Abstract
Recent masked diffusion models (MDMs) have shown competitive performance compared to autoregressive models (ARMs) for language modeling. While most literature has focused on performance-enhancing sampling procedures, efficient sampling from MDMs has been scarcely explored. We make the observation that often a given sequence of partially masked tokens determines the values of multiple unknown tokens deterministically, meaning that a single prediction of a masked model holds additional information unused by standard sampling procedures. Based on this observation, we introduce EB-Sampler, a simple drop-in replacement for existing samplers, utilizing an Entropy Bounded unmasking procedure that dynamically unmasks multiple tokens in one function evaluation with predefined approximate error tolerance. We formulate the EB-Sampler as part of a broad family of adaptive samplers for which we provide an error analysis that motivates our algorithmic choices. EB-Sampler accelerates sampling from current state-of-the-art MDMs by roughly 2-3x on standard coding and math reasoning benchmarks without loss in performance. We also validate that the same procedure works well on smaller reasoning tasks including maze navigation and Sudoku, tasks ARMs often struggle with.
Problem

Research questions and friction points this paper is trying to address.

Low sampling efficiency of masked diffusion models (MDMs)
Standard samplers discard information: one masked-model prediction often determines multiple unknown tokens
Accelerating MDM sampling without degrading output quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropy-bounded unmasking that dynamically decodes multiple tokens per model call
Broad family of adaptive samplers with an error analysis and predefined error tolerance
Roughly 2-3x sampling speedup on coding and math benchmarks without performance loss
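The core mechanism can be sketched in a few lines. The sketch below assumes a toy interface (the function name, the plain NumPy `probs` array, and the greedy cumulative-entropy rule are simplifying assumptions, not the paper's exact algorithm): given per-position predictive distributions from a single masked-model call, it unmasks the most certain positions while their cumulative entropy stays within a budget.

```python
import numpy as np

def eb_unmask_step(probs, masked_positions, entropy_budget=0.5):
    """One entropy-bounded unmasking step (illustrative sketch).

    probs: array of shape [seq_len, vocab_size], predictive distributions
           from a single masked-model forward pass.
    masked_positions: indices of still-masked tokens.
    entropy_budget: approximate error tolerance for this step (assumption:
           a simple cumulative-entropy cap stands in for the paper's bound).
    Returns a dict {position: token_id} of tokens to unmask.
    """
    eps = 1e-12  # avoid log(0)
    # Predictive entropy H(p) = -sum_v p(v) log p(v) at each masked position.
    entropies = {
        i: float(-np.sum(probs[i] * np.log(probs[i] + eps)))
        for i in masked_positions
    }
    # Visit masked positions from most to least certain.
    order = sorted(masked_positions, key=lambda i: entropies[i])

    chosen, total = [], 0.0
    for i in order:
        if chosen and total + entropies[i] > entropy_budget:
            break  # stop before exceeding the entropy budget
        total += entropies[i]
        chosen.append(i)  # always take at least one token to make progress
    return {i: int(np.argmax(probs[i])) for i in chosen}

# Toy example: 4 masked positions over a vocabulary of 3 tokens.
probs = np.array([
    [0.98, 0.01, 0.01],   # near-deterministic
    [0.90, 0.05, 0.05],   # fairly certain
    [0.34, 0.33, 0.33],   # uncertain
    [0.97, 0.02, 0.01],   # near-deterministic
])
print(eb_unmask_step(probs, [0, 1, 2, 3], entropy_budget=0.6))
# Unmasks only the two near-deterministic positions in this step.
```

The point of the adaptive rule is visible in the toy example: certain steps decode several tokens at once, while uncertain steps fall back to unmasking a single token, which is how the sampler trades model calls against the predefined error tolerance.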