AI Summary
This study investigates how far Masked Diffusion Language Models (MDLMs) realize their promised capabilities of parallel generation and arbitrary-order decoding, and why they underperform relative to autoregressive models. Using Average Finalization Parallelism (AFP) and Kendall's tau, the authors systematically evaluate the parallelism strength and decoding behavior of eight prominent MDLMs across 58 knowledge, reasoning, and programming tasks. The findings reveal that MDLMs generally underperform because parallel generation weakens inter-token dependencies, yet they show advantages on tasks requiring backward information integration, such as Sudoku. To address this limitation, the work proposes a Generate-then-Edit paradigm that preserves parallel efficiency while enhancing dependency modeling, mitigating information loss during generation.
Abstract
Masked Diffusion Language Models (MDLMs) promise parallel token generation and arbitrary-order decoding, yet it remains unclear to what extent current models truly realize these capabilities. We characterize MDLM behavior along two dimensions -- parallelism strength and generation order -- using Average Finalization Parallelism (AFP) and Kendall's tau. We evaluate eight mainstream MDLMs (up to 100B parameters) on 58 benchmarks spanning knowledge, reasoning, and programming. The results show that MDLMs still lag behind comparably sized autoregressive models, mainly because parallel probabilistic modeling weakens inter-token dependencies. Meanwhile, MDLMs exhibit adaptive decoding behavior: their parallelism and generation order vary significantly with the task domain, the stage of reasoning, and whether the output is correct. On tasks that require "backward information" (e.g., Sudoku), MDLMs adopt a solution order that tends to fill easier blanks first, highlighting their advantages. Finally, we provide theoretical motivation and design insights supporting a Generate-then-Edit paradigm, which mitigates dependency loss while retaining the efficiency of parallel decoding.
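To make the two measurement dimensions concrete, here is a minimal sketch of how metrics like these could be computed from a decoding trace. The exact definitions used in the paper are not given in the abstract, so the function names and the tie-handling choice (pairs finalized at the same step are ignored, as in tau-a) are assumptions for illustration: `kendalls_tau` compares the order in which positions were finalized against left-to-right order, and `afp` averages the number of tokens finalized per decoding step.

```python
from itertools import combinations

def kendalls_tau(finalize_step):
    """Kendall's tau between finalization order and left-to-right order.

    finalize_step[i] is the decoding step at which token position i was
    finalized. tau = 1.0 means strictly left-to-right decoding,
    tau = -1.0 strictly right-to-left. Tied pairs (same step) are
    skipped in the numerator (a tau-a-style simplification).
    """
    n = len(finalize_step)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):  # all position pairs with i < j
        if finalize_step[i] < finalize_step[j]:
            concordant += 1
        elif finalize_step[i] > finalize_step[j]:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def afp(steps):
    """Average Finalization Parallelism: mean tokens finalized per step.

    steps is a list of sets, each holding the positions finalized at
    one decoding step.
    """
    return sum(len(s) for s in steps) / len(steps)
```

For example, `kendalls_tau([0, 1, 2, 3])` gives 1.0 (purely sequential decoding), `kendalls_tau([3, 2, 1, 0])` gives -1.0, and `afp([{0, 1, 2}, {3}])` gives 2.0 (four tokens over two steps).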