🤖 AI Summary
This work addresses a practical limitation of diffusion language models, which often degenerate into autoregressive-like decoding and struggle to achieve efficient parallel generation. To overcome this, the authors propose NAP (Non-Autoregressive Parallel DLMs), a method that constructs multiple independent reasoning trajectories as training data and couples them with a forced parallel denoising sampling strategy. By co-designing the data structure and the decoding mechanism, NAP explicitly aligns the training objective with non-autoregressive generation. Notably, it introduces multi-trajectory supervision signals for the first time, effectively mitigating the model's inherent autoregressive bias. Experiments on mathematical reasoning benchmarks show that NAP significantly outperforms models trained on standard chain-of-thought data under high parallelism, with performance consistently improving as parallelism increases.
📝 Abstract
Diffusion Language Models (DLMs) are often advertised as enabling parallel token generation, yet practical fast DLMs frequently converge to left-to-right, autoregressive (AR)-like decoding dynamics. In contrast, genuinely non-AR generation is promising because it removes AR's sequential bottleneck, better exploiting parallel hardware to reduce synchronization/communication overhead and improve latency scaling with output length. We argue that a primary driver of AR-like decoding is a mismatch between DLM objectives and the highly sequential structure of widely used training data, including standard pretraining corpora and long chain-of-thought (CoT) supervision. Motivated by this diagnosis, we propose NAP (Non-Autoregressive Parallel DLMs), a proof-of-concept, data-centric approach that better aligns supervision with non-AR parallel decoding. NAP curates examples as multiple independent reasoning trajectories and couples them with a parallel-forced decoding strategy that encourages multi-token parallel updates. Across math reasoning benchmarks, NAP yields stronger performance under parallel decoding than DLMs trained on standard long CoT data, with gains growing as parallelism increases. Our results suggest that revisiting data and supervision is a principled direction for mitigating AR-like behavior and moving toward genuinely non-autoregressive parallel generation in DLMs. Our code is available at https://github.com/pixeli99/NAP.
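The abstract does not spell out the sampling algorithm, but a rough illustration of what "parallel-forced decoding" can mean for a masked diffusion LM is a schedule that commits a fixed number k of masked positions per denoising step rather than one. The sketch below is an assumption-laden toy, not the authors' implementation: the function names, the stand-in model, and the max-probability confidence criterion are all illustrative choices.

```python
import numpy as np

MASK = -1  # sentinel id for a still-masked position (illustrative choice)

def toy_logits(tokens, vocab_size, rng):
    # Stand-in for a masked-diffusion LM forward pass: per-position logits
    # over the vocabulary. A real model would condition on `tokens`.
    return rng.normal(size=(len(tokens), vocab_size))

def parallel_forced_decode(seq_len=16, vocab_size=32, k=4, seed=0):
    """Confidence-based parallel unmasking (a sketch, not NAP itself):
    commit the k most confident masked positions per step, so decoding
    takes ceil(seq_len / k) steps instead of seq_len."""
    rng = np.random.default_rng(seed)
    tokens = np.full(seq_len, MASK, dtype=np.int64)
    steps = 0
    while (tokens == MASK).any():
        logits = toy_logits(tokens, vocab_size, rng)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        conf = probs.max(-1)                # per-position confidence
        conf[tokens != MASK] = -np.inf      # never revisit committed tokens
        n = min(k, int((tokens == MASK).sum()))
        pick = np.argpartition(-conf, n - 1)[:n]  # top-n confident positions
        tokens[pick] = probs[pick].argmax(-1)     # commit them in parallel
        steps += 1
    return tokens, steps

tokens, steps = parallel_forced_decode()
print(f"decoded {len(tokens)} tokens in {steps} parallel steps")
```

With k = 1 the loop commits one token per step, the sequential behavior the abstract argues fast DLMs collapse into; larger k is what "higher parallelism" refers to here, and the paper's claim is that supervision built from multiple independent trajectories makes such multi-token commits accurate rather than degraded.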