Divide and Conquer: Accelerating Diffusion-Based Large Language Models via Adaptive Parallel Decoding

📅 2026-02-27
🤖 AI Summary
Although diffusion-based large language models (dLLMs) hold promise for parallel generation, existing approaches still rely on autoregressive token-by-token decoding to maintain output quality, thereby failing to realize their theoretical parallelism. This work proposes DiCo, an adaptive multi-token parallel decoding method grounded in a divide-and-conquer paradigm. DiCo dynamically constructs local clusters of masked tokens, alternates between partitioning and conquering phases until convergence, and closes with a fine-grained compound decoding pass to finalize generation. It is the first approach to achieve high-quality, adaptive parallel inference in dLLMs, significantly accelerating text generation across diverse tasks while preserving output quality comparable to that of sequential token-by-token decoding. By doing so, DiCo effectively bridges the gap between the theoretical parallelism inherent in dLLMs and their practical inference performance.

📝 Abstract
Diffusion-based large language models (dLLMs) have shown promising performance across various reasoning tasks, establishing themselves as an alternative to autoregressive large language models (LLMs). Unlike autoregressive LLMs that generate one token per step based on all previous tokens, dLLMs theoretically enable parallel generation of multiple tokens at each decoding step. However, recent dLLMs still favor one-token-per-step generation in practice, as directly decoding multiple masked tokens often leads to degraded generation quality and stability. This reveals a substantial gap between the theoretical parallelism and practical performance of dLLMs. To bridge this gap, we introduce an adaptive parallel decoding approach, namely DiCo, which features a three-phase divide-and-conquer paradigm to unleash the inherent parallelism of dLLMs. During the Divide phase, DiCo first explores the input masked sequence and identifies masked tokens as seed tokens, which are then expanded to construct a set of local clusters. During the Conquer phase, DiCo performs parallel decoding across different local clusters constructed in the Divide phase. The divide-and-conquer process repeatedly alternates between the Divide and Conquer phases until convergence. During the Finalize phase, DiCo decodes the remaining few masked tokens using an effective fine-grained compound decoding scheme to finalize the generation. Extensive experiments demonstrate that DiCo can achieve significant inference speedups while maintaining competitive generation quality.
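The three-phase loop described in the abstract could be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the confidence scores, the seed-selection threshold, the fixed expansion radius, and all function names (`mock_confidences`, `divide`, `conquer`, `finalize`, `dico_decode`) are assumptions introduced here for clarity.

```python
import random

random.seed(0)
MASK = None  # placeholder for a masked position

def mock_confidences(seq):
    # Stand-in for a dLLM forward pass: a confidence score per masked slot.
    # A real system would derive these from the model's token distributions.
    return {i: random.random() for i, t in enumerate(seq) if t is MASK}

def divide(seq, conf, threshold=0.7, radius=1):
    # Divide phase (illustrative): masked positions above the threshold
    # become seed tokens; each seed is expanded to nearby masked neighbors
    # to form a disjoint local cluster.
    seeds = [i for i, c in conf.items() if c >= threshold]
    clusters, used = [], set()
    for s in seeds:
        cluster = [j for j in range(s - radius, s + radius + 1)
                   if 0 <= j < len(seq) and seq[j] is MASK and j not in used]
        if cluster:
            used.update(cluster)
            clusters.append(cluster)
    return clusters

def conquer(seq, clusters):
    # Conquer phase: decode all clusters in the same step
    # (here, filling each slot with a placeholder token).
    for cluster in clusters:
        for j in cluster:
            seq[j] = f"tok{j}"

def finalize(seq, conf):
    # Finalize phase (simplified): decode the single most confident
    # remaining masked position, one token per step.
    j = max(conf, key=conf.get)
    seq[j] = f"tok{j}"

def dico_decode(length=12, min_parallel=2):
    seq = [MASK] * length
    while any(t is MASK for t in seq):
        conf = mock_confidences(seq)
        clusters = divide(seq, conf)
        if sum(len(c) for c in clusters) >= min_parallel:
            conquer(seq, clusters)   # parallel multi-token step
        else:
            finalize(seq, conf)      # fine-grained fallback for stragglers
    return seq
```

In this sketch, the loop alternates Divide and Conquer while enough confident clusters exist, then falls back to one-token-per-step finalization for the few remaining masked positions, mirroring the speed/quality trade-off the paper describes.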
Problem

Research questions and friction points this paper is trying to address.

diffusion-based LLMs, parallel decoding, generation quality, inference speedup, masked token generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion-based LLMs, adaptive parallel decoding, divide-and-conquer, masked token generation, inference acceleration