LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low parallelism (only 1–3 tokens per forward pass) in diffusion-based large language models (dLLMs) severely limits inference throughput. To address this, we propose LoPA—a training-free, plug-and-play lookahead parallel decoding algorithm. LoPA is the first to identify and exploit the high sensitivity of dLLM parallelism to token-filling order (TFO); it introduces a dynamic TFO selection mechanism guided by branch confidence scores. Furthermore, LoPA implements Branch Parallelism—a multi-GPU inference system enabling ultra-wide parallelism (>10 tokens per forward pass). Evaluated on GSM8K, LoPA boosts D2F-Dream’s tokens-per-forward from ≤3 to 10.1—substantially outperforming the Dream baseline—while achieving a single-sample throughput of 1073.9 tokens/s. These results demonstrate LoPA’s effectiveness, efficiency, and scalability for dLLM inference.

📝 Abstract
Diffusion Large Language Models (dLLMs) have demonstrated significant potential for high-speed inference. However, current confidence-driven decoding strategies are constrained by limited parallelism, typically achieving only 1--3 tokens per forward pass (TPF). In this work, we identify that the degree of parallelism during dLLM inference is highly sensitive to the Token Filling Order (TFO). We then introduce Lookahead PArallel Decoding (LoPA), a training-free, plug-and-play algorithm, to identify a superior TFO and hence accelerate inference. LoPA concurrently explores distinct candidate TFOs via parallel branches, and selects the one with the highest potential for future parallelism based on branch confidence. We apply LoPA to the state-of-the-art D2F model and observe a substantial enhancement in decoding efficiency. Notably, LoPA increases the TPF of D2F-Dream to 10.1 on GSM8K while maintaining performance superior to the Dream baseline. Furthermore, to facilitate this unprecedented degree of parallelism, we develop a specialized multi-device inference system featuring Branch Parallelism (BP), which achieves a single-sample throughput of 1073.9 tokens per second under multi-GPU deployment. The code is available at https://github.com/zhijie-group/LoPA.
Problem

Research questions and friction points this paper is trying to address.

Enhancing parallelism in dLLM inference
Optimizing token filling order for efficiency
Increasing tokens per forward pass speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lookahead Parallel Decoding for improved token generation order
Training-free algorithm exploring candidate orders via parallel branches
Multi-device inference system with Branch Parallelism for high throughput
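The branch-selection idea above — score each candidate token-filling order by the parallelism its lookahead pass promises, then keep the best branch — can be illustrated with a toy sketch. The function name, the 0.9 threshold, and the count-above-threshold score are illustrative assumptions, not the paper's exact criterion:

```python
def select_branch(branch_confidences, threshold=0.9):
    """Pick the branch whose lookahead confidences promise the most
    future parallelism. Each branch corresponds to one candidate
    token-filling order; its score is the number of masked positions
    whose lookahead confidence clears the decoding threshold
    (an assumed proxy for tokens decodable in the next forward pass)."""
    def expected_parallelism(confs):
        return sum(1 for c in confs if c >= threshold)
    return max(range(len(branch_confidences)),
               key=lambda i: expected_parallelism(branch_confidences[i]))

# Toy lookahead confidences for three candidate token-filling orders.
branches = [
    [0.95, 0.50, 0.91],  # order A: 2 positions clear the threshold
    [0.99, 0.92, 0.93],  # order B: 3 positions clear the threshold
    [0.30, 0.20, 0.10],  # order C: none do
]
best = select_branch(branches)  # → 1 (order B)
```

In the actual system, each branch would be evaluated by a real lookahead forward pass, and Branch Parallelism places the branches on separate GPUs so the candidate orders are scored concurrently.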