🤖 AI Summary
This work addresses the inefficiency of parallel decoding in diffusion language models, which stems from the mismatch between static scheduling during training and dynamic generation during inference. To overcome this limitation, the authors propose Discrete Space Consistency Distillation (DSCD), which renders the model invariant to denoising trajectories, and introduce a Confidence-Adaptive Decoding (CAD) strategy that dynamically allocates computational resources. This approach breaks free from the fixed scheduling constraints of conventional diffusion models, achieving significantly accelerated parallel inference while maintaining or even improving generation quality. Empirical results demonstrate that the method matches baseline accuracy on GSM8K with a 5.18× speedup, and achieves an average 3.62× acceleration across code and mathematical reasoning tasks with higher accuracy, consistently outperforming existing approaches.
📝 Abstract
Autoregressive large language models achieve strong results on many benchmarks, but decoding remains fundamentally latency-limited by sequential dependence on previously generated tokens. Diffusion language models (DLMs) promise parallel generation but suffer from a fundamental static-to-dynamic misalignment: training optimizes local transitions under fixed schedules, whereas efficient inference requires adaptive "long-jump" refinements through unseen states. Our goal is to enable highly parallel decoding for DLMs with a low number of function evaluations while preserving generation quality. To achieve this, we propose CD4LM, a framework that decouples training from inference via Discrete-Space Consistency Distillation (DSCD) and Confidence-Adaptive Decoding (CAD). Unlike standard objectives, DSCD trains a student to be trajectory-invariant, mapping diverse noisy states directly to the clean distribution. This intrinsic robustness enables CAD to dynamically allocate compute based on token confidence, aggressively skipping steps without the quality collapse typical of heuristic acceleration. On GSM8K, CD4LM matches the LLaDA baseline with a 5.18x wall-clock speedup; across code and math benchmarks, it strictly dominates the accuracy-efficiency Pareto frontier, achieving a 3.62x mean speedup while improving average accuracy. Code is available at https://github.com/yihao-liang/CDLM.
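The confidence-adaptive idea can be illustrated with a toy decoding loop: at each step, every still-masked position whose predicted confidence clears a threshold is committed in parallel, so easy sequences finish in a handful of passes instead of one token per step. This is a minimal sketch under assumed interfaces; `confidence_adaptive_decode`, `predict`, the threshold `tau`, and the mock predictor are all hypothetical illustrations, not the paper's actual CAD implementation.

```python
import random

def confidence_adaptive_decode(predict, seq_len, tau=0.9, max_steps=64):
    """Toy sketch of confidence-adaptive parallel decoding (hypothetical API).

    `predict` maps the partially decoded sequence to a per-position
    (token, confidence) guess. Positions whose confidence clears `tau`
    are committed in parallel each step; hard positions are left masked
    for further refinement passes.
    """
    tokens = [None] * seq_len            # None marks a still-masked position
    for step in range(1, max_steps + 1):
        guesses = predict(tokens)        # list of (token, confidence) per position
        committed = False
        for i, (tok, conf) in enumerate(guesses):
            if tokens[i] is None and conf >= tau:
                tokens[i] = tok
                committed = True
        if all(t is not None for t in tokens):
            return tokens, step
        if not committed:                # avoid stalling: commit the single best guess
            i = max((j for j, t in enumerate(tokens) if t is None),
                    key=lambda j: guesses[j][1])
            tokens[i] = guesses[i][0]
    return tokens, max_steps

# Mock predictor (pure illustration): confidence rises as context is revealed.
def mock_predict(tokens):
    revealed = sum(t is not None for t in tokens)
    rng = random.Random(revealed)
    return [(i, min(1.0, 0.5 + 0.1 * revealed + rng.random() * 0.3))
            for i in range(len(tokens))]

out, steps = confidence_adaptive_decode(mock_predict, seq_len=16, tau=0.9)
```

With this mock predictor the 16-token sequence completes in far fewer than 16 steps, which is the source of the wall-clock speedups the paper reports; the real method additionally relies on DSCD training to keep quality intact under such aggressive skipping.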