🤖 AI Summary
Masked diffusion language models (MDMs) exhibit significant performance disparities across evaluation metrics, yet the fundamental trade-off between their efficiency and accuracy remains theoretically uncharacterized.
Method: We establish the first rigorous theoretical framework for MDMs, integrating probabilistic modeling, information-theoretic bounds, and sampling complexity analysis to systematically characterize their intrinsic capability limits.
Results: We prove that, under perplexity, MDMs achieve near-optimal performance in a constant number of sampling steps, independent of sequence length, whereas under sequence error rate the number of sampling steps must scale linearly with sequence length. This reveals a critical insight: parallel sampling does not universally improve efficiency, challenging prevailing intuitions. All theoretical findings are empirically validated across diverse architectures and datasets. Our work provides both a foundational theory and practical guidance for the design, analysis, and evaluation of diffusion-based language models.
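To make the contrast between the two metrics concrete, here is a small illustrative sketch (ours, not code from the paper), assuming access to the per-token probabilities a model assigns: perplexity is a per-token average, while sequence error rate requires every token in a sequence to be correct, so per-token quality that looks excellent can still leave most full sequences wrong.

```python
import math

def perplexity(token_probs):
    """Per-token metric: exp of the mean negative log-probability."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

def sequence_error_rate(sampled, references):
    """Whole-sequence metric: fraction of sequences with any wrong token."""
    wrong = sum(s != r for s, r in zip(sampled, references))
    return wrong / len(sampled)

# A model assigning probability 0.99 to each of 100 tokens looks excellent
# under perplexity, yet an independently sampled sequence is entirely
# correct only with probability 0.99**100, i.e. wrong ~63% of the time.
print(perplexity([0.99] * 100))  # ~1.0101 (near-optimal perplexity)
print(1 - 0.99 ** 100)           # ~0.634  (high sequence error rate)
```

This gap is precisely why a sampler that looks near-optimal under perplexity can still fall short on correctness-sensitive outputs such as reasoning chains.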
📝 Abstract
Diffusion language models have emerged as a promising approach for text generation. One would naturally expect this method to be an efficient replacement for autoregressive models, since multiple tokens can be sampled in parallel during each diffusion step. However, its efficiency-accuracy trade-off is not yet well understood. In this paper, we present a rigorous theoretical analysis of a widely used type of diffusion language model, the Masked Diffusion Model (MDM), and find that its effectiveness heavily depends on the target evaluation metric. Under mild conditions, we prove that when using perplexity as the metric, MDMs can achieve near-optimal perplexity in a constant number of sampling steps regardless of sequence length, demonstrating that efficiency can be achieved without sacrificing performance. However, when using the sequence error rate -- which is important for understanding the "correctness" of a sequence, such as a reasoning chain -- we show that the required sampling steps must scale linearly with sequence length to obtain "correct" sequences, thereby eliminating MDM's efficiency advantage over autoregressive models. Our analysis establishes the first theoretical foundation for understanding the benefits and limitations of MDMs. All theoretical findings are supported by empirical studies.
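For readers unfamiliar with how MDMs sample tokens in parallel, the following is a minimal sketch of a masked-diffusion reverse sampler under our own simplifying assumptions (a hypothetical `model` mapping a partially masked sequence to per-position logits, a reserved `mask_id`, and a fixed left-to-right unmasking schedule); it is an illustration of the general mechanism, not the paper's implementation.

```python
import torch

def mdm_sample(model, seq_len, vocab_size, num_steps):
    """Sketch of masked-diffusion sampling: start fully masked, then at
    each reverse step unmask a block of positions in parallel."""
    mask_id = vocab_size  # reserve one extra id as the [MASK] token
    x = torch.full((seq_len,), mask_id, dtype=torch.long)
    masked = list(range(seq_len))
    per_step = -(-seq_len // num_steps)  # ceil(seq_len / num_steps)
    for _ in range(num_steps):
        if not masked:
            break
        chosen, masked = masked[:per_step], masked[per_step:]
        logits = model(x)  # assumed output shape: (seq_len, vocab_size)
        probs = torch.softmax(logits[chosen], dim=-1)
        # The chosen positions are sampled independently of one another,
        # conditioned only on tokens already unmasked -- the source of the
        # metric-dependent trade-off analyzed in the paper.
        x[chosen] = torch.multinomial(probs, num_samples=1).squeeze(-1)
    return x
```

In this schedule, `num_steps` directly trades compute for dependency modeling: with `num_steps` equal to `seq_len` the sampler reveals one token at a time, much like autoregressive decoding, while the paper's results show that a constant `num_steps` already suffices for near-optimal perplexity but that driving down the sequence error rate forces `num_steps` to grow linearly in `seq_len`.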