MDM-Prime-v2: Binary Encoding and Index Shuffling Enable Compute-optimal Scaling of Diffusion Language Models

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two limitations of the MDM-Prime framework: the lack of principled guidance for selecting sub-token granularity, and a significant degradation in likelihood estimation when the framework is paired with BPE tokenizers. To overcome these limitations, the authors propose MDM-Prime-v2, which incorporates binary encoding and index shuffling to optimize the variational lower bound of masked diffusion language models. The method preserves the advantages of diffusion modeling while substantially improving both compute efficiency and language modeling performance. In compute-optimal comparisons on OpenWebText, MDM-Prime-v2 achieves a perplexity of 7.77 and is 21.8× more compute-efficient than autoregressive baselines. Scaled to 1.1 billion parameters, it further demonstrates strong zero-shot commonsense reasoning, outperforming both conventional autoregressive and prior diffusion-based language models.

📝 Abstract
Masked diffusion models (MDM) exhibit superior generalization when learned using a partial masking scheme (Prime). This approach converts tokens into sub-tokens and models the diffusion process at the sub-token level. We identify two limitations of the MDM-Prime framework. First, we lack tools to guide the choice of the token-granularity hyperparameter in the subtokenizer. Second, we find that the functional form of the subtokenizer significantly degrades likelihood estimation when paired with commonly used Byte-Pair Encoding (BPE) tokenizers. To address these limitations, we study the tightness of the variational bound in MDM-Prime and develop MDM-Prime-v2, a masked diffusion language model that incorporates Binary Encoding and Index Shuffling. Our scaling analysis reveals that MDM-Prime-v2 is 21.8$\times$ more compute-efficient than autoregressive models (ARM). In compute-optimal comparisons, MDM-Prime-v2 achieves 7.77 perplexity on OpenWebText, outperforming ARM (12.99), MDM (18.94), and MDM-Prime (13.41). When extending the model size to 1.1B parameters, our model further demonstrates superior zero-shot accuracy on various commonsense reasoning tasks.
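The binary encoding and index shuffling described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact construction: it assumes the shuffle is a fixed random permutation applied to BPE token ids before bit-level sub-tokenization, and `VOCAB_SIZE`, `NUM_BITS`, and the permutation seed are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 50_257  # e.g. the GPT-2 BPE vocabulary size (illustrative)
NUM_BITS = 16        # ceil(log2(50257)) = 16 binary sub-tokens per token

# Hypothetical index shuffle: a fixed random permutation of token ids,
# applied before binary encoding so bit patterns are decorrelated from
# BPE's frequency-ordered id assignment.
perm = rng.permutation(VOCAB_SIZE)
inv_perm = np.argsort(perm)

def encode(token_ids):
    """Map each token id to NUM_BITS binary sub-tokens (shuffle, then bits)."""
    shuffled = perm[np.asarray(token_ids)]
    # Extract each bit: shape (seq_len, NUM_BITS), values in {0, 1}.
    return (shuffled[:, None] >> np.arange(NUM_BITS)) & 1

def decode(bits):
    """Invert the binary encoding and the index shuffle."""
    shuffled = (bits * (1 << np.arange(NUM_BITS))).sum(axis=1)
    return inv_perm[shuffled]

# Round-trip check on a few sample ids.
ids = np.array([0, 1, 42, 50_256])
assert np.array_equal(decode(encode(ids)), ids)
```

A masked diffusion process would then operate on the `(seq_len, NUM_BITS)` sub-token grid, masking individual bits rather than whole tokens; the decoder maps completed bit patterns back to token ids.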
Problem

Research questions and friction points this paper is trying to address.

masked diffusion models
sub-tokenization
likelihood estimation
token granularity
Byte-Pair-Encoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Binary Encoding
Index Shuffling
Masked Diffusion Models
Compute-optimal Scaling
Sub-token Diffusion