Parallelism and Generation Order in Masked Diffusion Language Models: Limits Today, Potential Tomorrow

πŸ“… 2026-01-22
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study investigates the practical capabilities of Masked Diffusion Language Models (MDLMs) in parallel generation and arbitrary-order decoding, and the reasons for their underperformance relative to autoregressive models. Using Average Finalization Parallelism (AFP) and Kendall's tau, the authors systematically evaluate the parallelism strength and decoding behavior of eight prominent MDLMs across 58 knowledge, reasoning, and programming tasks. The findings reveal that MDLMs generally underperform because parallel generation weakens inter-token dependencies, yet they show advantages on tasks requiring backward-looking information integration, such as Sudoku. To address this limitation, the work proposes a Generate-then-Edit paradigm that preserves parallel efficiency while strengthening dependency modeling, mitigating information loss during generation.
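For intuition, the Generate-then-Edit idea can be pictured as a two-stage decode loop: one fully parallel generation pass, followed by edit passes that remask and re-predict the least-confident tokens against the now mostly-complete context. The sketch below is an illustration only, not the paper's implementation; the interface `model(tokens) -> (ids, confidences)`, the `MASK_ID` constant, and the fixed-fraction remasking schedule are all assumptions.

```python
import torch

MASK_ID = 0  # hypothetical mask-token id; model-specific in practice


def generate_then_edit(model, length, edit_rounds=2, remask_frac=0.2):
    """Illustrative Generate-then-Edit loop (assumed interface: model(tokens)
    fills masked positions and returns predicted ids plus per-token
    confidences, passing unmasked tokens through unchanged)."""
    tokens = torch.full((length,), MASK_ID)
    # Stage 1: generate -- fill all masked positions in one parallel pass.
    tokens, conf = model(tokens)
    # Stage 2: edit -- remask the least-confident tokens and re-predict them
    # conditioned on the surrounding completed text, recovering some of the
    # inter-token dependencies that one-shot parallel decoding weakens.
    for _ in range(edit_rounds):
        k = max(1, int(remask_frac * length))
        weakest = torch.topk(conf, k, largest=False).indices
        tokens[weakest] = MASK_ID
        tokens, conf = model(tokens)
    return tokens
```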

πŸ“ Abstract
Masked Diffusion Language Models (MDLMs) promise parallel token generation and arbitrary-order decoding, yet it remains unclear to what extent current models truly realize these capabilities. We characterize MDLM behavior along two dimensions -- parallelism strength and generation order -- using Average Finalization Parallelism (AFP) and Kendall's tau. We evaluate eight mainstream MDLMs (up to 100B parameters) on 58 benchmarks spanning knowledge, reasoning, and programming. The results show that MDLMs still lag behind comparably sized autoregressive models, mainly because parallel probabilistic modeling weakens inter-token dependencies. Meanwhile, MDLMs exhibit adaptive decoding behavior: their parallelism and generation order vary significantly with the task domain, the stage of reasoning, and whether the output is correct. On tasks that require "backward information" (e.g., Sudoku), MDLMs adopt a solution order that tends to fill easier blanks first, highlighting their advantages. Finally, we provide theoretical motivation and design insights supporting a Generate-then-Edit paradigm, which mitigates dependency loss while retaining the efficiency of parallel decoding.
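To make the two diagnostics concrete, the sketch below shows one plausible way to compute them from a decode trace, assuming a hypothetical log where `trace[t]` is the set of token positions finalized at denoising step t; this is an illustration under those assumptions, not the authors' evaluation code.

```python
# Minimal sketch of the two decode-trace metrics; `trace` is a hypothetical
# log where trace[t] is the set of token positions finalized at step t.
from scipy.stats import kendalltau


def afp(trace):
    """Average Finalization Parallelism: mean tokens finalized per step."""
    return sum(len(step) for step in trace) / len(trace)


def order_tau(trace):
    """Kendall's tau between token position and finalization step.
    tau near 1: near left-to-right (autoregressive-like) decoding;
    tau near 0: generation order uncorrelated with position."""
    positions, steps = [], []
    for t, finalized in enumerate(trace):
        for pos in finalized:
            positions.append(pos)
            steps.append(t)
    tau, _ = kendalltau(positions, steps)
    return tau


# Toy trace: 6 tokens finalized over 3 steps, mostly left to right.
trace = [{0, 1}, {2, 4}, {3, 5}]
print(afp(trace))        # 2.0 tokens finalized per step
print(order_tau(trace))  # ~0.75: biased toward left-to-right order
```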
Problem

Research questions and friction points this paper is trying to address.

Masked Diffusion Language Models
parallelism
generation order
token dependencies
autoregressive models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked Diffusion Language Models
parallel token generation
arbitrary-order decoding
Generate-then-Edit
Average Finalization Parallelism
πŸ”Ž Similar Papers
No similar papers found.
👥 Authors
Yangyang Zhong
Zhejiang University, Ant Group
Yanmei Gu
Ant Group
Zhengqing Zang
Zhejiang University, Ant Group
Xiaomeng Li
Assistant Professor, The Hong Kong University of Science and Technology
Medical Image Analysis, AI in Healthcare, Deep Learning
Yuqi Ding
Ant Group, University of Chinese Academy of Social Sciences
Xibei Jia
Zhejiang University, Ant Group
Yuting Shen
Ant Group, Shanghai Jiao Tong University
Zhenzhong Lan
School of Engineering, Westlake University
NLP, Computer Vision, Multimedia
Liwang Zhu
Ant Group
Weiping Liu
Ant Group
Junlin Zhou
Associate Professor of Computer Science, University of Electronic Science and Technology of China
Recommender Systems, Data Mining, Big Data Analysis
Haisheng Liu
Ant Group
Zhong Xin Yu
Ant Group
Pengxin Luo
Zhejiang University
Donglian Qi
Zhejiang University
Power Systems, Control
Yunfeng Yan
Zhejiang University
Junbo Zhao
Zhejiang University, ZJU100 Young Professor
AI, LLMs