🤖 AI Summary
Diffusion language models (DLMs) suffer from significantly lower end-to-end inference throughput than autoregressive (AR) models, yet existing efficiency evaluation methodologies exhibit systematic flaws: they neglect hardware bottlenecks and batch-size scaling effects, and model decoding parallelism inadequately.
Method: We establish a comprehensive empirical benchmark across multiple DLMs and hardware platforms, augmented by a roofline-based theoretical throughput analysis that quantifies compute-utilization bottlenecks across batch sizes.
Contribution/Results: Our analysis reveals that acceleration techniques such as dual-cache scheduling and parallel denoising deliver diminishing returns beyond small batch sizes, with throughput gains collapsing in large-batch regimes. Crucially, no open-source DLM consistently surpasses AR models in end-to-end throughput. We propose a robust, hardware-aware efficiency evaluation framework for DLMs, providing both theoretical grounding and an empirically validated benchmark to guide the co-design of architectures and systems.
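The roofline argument above can be sketched numerically. The following is a minimal, hypothetical model (the hardware numbers and the per-step cost model are illustrative assumptions, not the paper's exact methodology): each decoding step reads the full weight set once, so at small batch sizes the step is memory-bound and extra parallelism is nearly free, while past the roofline crossover the step becomes compute-bound and additional batch or parallel-decoded tokens no longer raise throughput.

```python
# Hypothetical A100-class hardware numbers (assumptions for illustration).
PEAK_FLOPS = 312e12   # peak dense FLOP/s
PEAK_BW = 2.0e12      # peak HBM bandwidth, bytes/s

def attainable_flops(arithmetic_intensity: float) -> float:
    """Roofline: attainable FLOP/s = min(compute roof, bandwidth * intensity)."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

def decode_throughput(batch: int, params: float = 7e9,
                      bytes_per_param: int = 2,
                      tokens_per_step: int = 1) -> float:
    """Tokens/s for one decoding step under a simplified cost model:
    ~2 * params FLOPs per generated token, and one full read of the
    weights per step (weight traffic dominates at small batch)."""
    flops = 2 * params * batch * tokens_per_step
    bytes_moved = params * bytes_per_param
    intensity = flops / bytes_moved
    step_time = flops / attainable_flops(intensity)
    return batch * tokens_per_step / step_time

# Memory-bound regime: throughput scales ~linearly with batch.
print(decode_throughput(1), decode_throughput(64))
# Compute-bound regime: throughput plateaus, and decoding 4 tokens per
# step (as in parallel denoising) no longer helps.
print(decode_throughput(512), decode_throughput(512, tokens_per_step=4))
```

Under this toy model, the gains from parallel decoding and larger batches both vanish once the step crosses the roofline ridge point, which mirrors the empirical finding that dual cache and parallel decoding help mainly at small batch sizes.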
📝 Abstract
Diffusion language models (DLMs) have emerged as a promising alternative to the long-dominant autoregressive (AR) paradigm, offering a parallelizable decoding process that could yield greater efficiency. Yet, in practice, current open-source DLMs often underperform their AR counterparts in speed, limiting their real-world utility. This work presents a systematic study of DLM efficiency, identifying key issues in prior evaluation methods. Through empirical benchmarking and a roofline-based theoretical analysis, we demonstrate that AR models generally achieve higher throughput, while DLMs consistently lag behind. We also investigate acceleration strategies, finding that techniques like dual cache and parallel decoding mainly offer gains at small batch sizes, with their benefits diminishing as batch size scales. Our findings underscore the necessity of robust evaluation methods and improved acceleration strategies to advance research on DLMs.