🤖 AI Summary
This work investigates the fundamental efficiency trade-offs between diffusion language models (DLMs) and autoregressive language models (ARMs). DLMs decode tokens in parallel and therefore reach much higher arithmetic intensity than ARMs, whose strictly sequential next-token prediction leaves hardware memory-bound; however, DLM inference fails to scale efficiently to longer contexts and benefits less from batching. Combining theoretical modeling with hardware-level profiling, the study quantifies these trade-offs among sequence-level parallelism, computational density, and batch throughput, and examines block-wise decoding, which retains DLMs' high arithmetic intensity while scaling to long contexts in a manner similar to ARMs. The analysis further shows that ARMs achieve superior batched throughput and identifies reducing the number of sampling steps as the key lever for open-source DLMs to deliver lower latency than ARMs.
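To make the arithmetic-intensity contrast concrete, the back-of-envelope sketch below compares FLOPs per byte of weight traffic for a single forward pass of an ARM (one new token per pass) versus a DLM (a whole masked sequence denoised in parallel). The model size, token counts, and cost model (2 FLOPs per parameter per token, weights streamed once per pass, KV-cache traffic ignored) are illustrative assumptions, not measurements from the paper.

```python
# Hypothetical back-of-envelope comparison of arithmetic intensity
# (FLOPs per byte of weights read) for one decoding forward pass.

def arithmetic_intensity(tokens_per_forward: int, n_params: float,
                         bytes_per_param: int = 2) -> float:
    """Rough FLOPs-per-byte estimate for a decoder forward pass.

    Assumes ~2 * n_params FLOPs per token (matmul-dominated) and that
    the full weight matrix is streamed from memory once per forward
    pass, regardless of how many token positions are processed.
    """
    flops = 2.0 * n_params * tokens_per_forward
    bytes_moved = n_params * bytes_per_param
    return flops / bytes_moved

n_params = 7e9  # assumed 7B-parameter model in fp16

# ARM decoding: one new token per forward pass -> memory-bound.
print("ARM FLOPs/byte:", arithmetic_intensity(tokens_per_forward=1, n_params=n_params))

# DLM decoding: many masked positions denoised in one pass -> compute-bound.
print("DLM FLOPs/byte:", arithmetic_intensity(tokens_per_forward=1024, n_params=n_params))
```

Under these assumptions the ARM step lands at roughly 1 FLOP per byte while the DLM step is three orders of magnitude higher, which is the intuition behind the arithmetic-intensity claim above; real figures depend on KV-cache traffic, batch size, and precision.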
📝 Abstract
Large Language Models (LLMs) have achieved state-of-the-art performance on a broad range of Natural Language Processing (NLP) tasks, including document processing and coding. Autoregressive Language Models (ARMs), which generate tokens sequentially conditioned on all previous tokens, have been the predominant paradigm for LLMs. However, while these models achieve high accuracy across a range of downstream tasks, they exhibit low arithmetic intensity due to the inherent sequential dependency of next-token prediction. Recently, Diffusion Language Models (DLMs) have emerged as a promising alternative architecture. DLMs generate output text in parallel, breaking the limitation of sequential dependency. However, the performance implications of DLMs relative to commonly deployed ARMs are not fully understood. In this work, we present a comprehensive performance study of ARMs and DLMs, using both theoretical analysis and profiling data to characterize the trade-offs between the two approaches. We show that although DLMs achieve higher arithmetic intensity than ARMs because they exploit parallelism across the sequence length, they fail to scale effectively to longer contexts. We then explore DLMs with block-wise decoding, outlining how this approach preserves the increased arithmetic intensity while still scaling well to long contexts (similar to ARMs). We also show interesting trade-offs for batched inference, where we find that ARMs exhibit superior throughput because they benefit more from parallelism across sequences in the batch. Finally, we highlight opportunities for accelerating DLM inference and, in particular, the importance of reducing the number of sampling steps for allowing open-source DLMs to provide improved latency relative to ARMs.
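As a rough illustration of the block-wise decoding idea discussed above, the following sketch decodes a DLM semi-autoregressively: the output is split into fixed-size blocks, and only the tokens inside the current block are denoised in parallel over a small number of sampling steps, conditioned on the already-generated prefix. The model interface (`denoise_step`), the mask token, and the default block size and step count are hypothetical placeholders under stated assumptions, not the implementation evaluated in the paper.

```python
from typing import Callable, List

MASK_ID = -1  # placeholder id for a masked (not-yet-decoded) position

def blockwise_decode(denoise_step: Callable[[List[int], int, int], List[int]],
                     prompt: List[int],
                     num_new_tokens: int,
                     block_size: int = 32,
                     sampling_steps: int = 8) -> List[int]:
    """Semi-autoregressive block-wise DLM decoding (illustrative sketch).

    `denoise_step(seq, start, length)` is assumed to return refined token
    ids for the masked span seq[start:start+length], conditioned on the
    full sequence to the left of `start`.
    """
    seq = list(prompt)
    generated = 0
    while generated < num_new_tokens:
        cur = min(block_size, num_new_tokens - generated)
        start = len(seq)
        seq.extend([MASK_ID] * cur)           # append a fully masked block
        for _ in range(sampling_steps):       # iterative parallel denoising
            # All positions in the current block are refined at once,
            # conditioned on the fixed prefix before `start`.
            seq[start:start + cur] = denoise_step(seq, start, cur)
        generated += cur                      # block is now frozen; move on
    return seq
```

Under this sketch, latency scales roughly as (num_new_tokens / block_size) * sampling_steps forward passes, which is why reducing the number of sampling steps matters so much for DLM latency; larger blocks raise per-pass arithmetic intensity, while keeping the block bounded avoids re-denoising the entire, growing context on every step.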