From Next-Token to Next-Block: A Principled Adaptation Path for Diffusion LLMs

📅 2025-12-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autoregressive (AR) language models suffer from low generation throughput, while diffusion language models (DLMs) struggle to efficiently leverage pretrained AR knowledge. Method: We propose the first systematic AR-to-block-diffusion adaptation framework. Key innovations include: (1) a context-aware causal attention mask that explicitly reconciles the fundamental conflict between AR’s unidirectional constraint and bidirectional intra-block inference; and (2) a progressive block-size growth strategy ensuring training–inference consistency. Our approach integrates masked block diffusion, parallel adaptation training, auxiliary AR loss, and the proposed attention mechanism—enabling efficient fine-tuning of AR checkpoints without full retraining. Results: We instantiate the framework as NBDiff-7B, achieving state-of-the-art performance among 7B-scale DLMs across general knowledge, mathematical reasoning, and code generation benchmarks—demonstrating superior efficacy and scalability.

📝 Abstract
Large language models (LLMs) excel at generation, but dominant autoregressive (AR) decoding is inherently sequential, creating a throughput bottleneck. Diffusion Language Models (DLMs)--especially block-wise variants--enable parallel generation and intra-block bidirectional reasoning, yet training large DLMs from scratch is costly and wastes the knowledge in mature AR checkpoints. Prior "adaptation" attempts either modify logits, randomly grow attention masks toward full-sequence diffusion, or simply transplant AR weights into a block-diffusion recipe, leaving the fundamental mismatch between AR causality and block-wise bidirectionality unaddressed. We reframe adaptation as an intra-paradigm path from AR to block diffusion by viewing AR as block diffusion with block size 1. Concretely, we design the adaptation pathway as follows: a context-causal attention mask (causal over the context, bidirectional only within the active block), an efficient parallel adaptation procedure, an auxiliary AR loss to maximize data utilization and retain pretrained knowledge, and a gradual increase of the generation block size. The recipe integrates cleanly with masked block diffusion and maintains train-inference consistency. Built on these components, NBDiff-7B (Base and Instruct) inherits long-context modeling and reasoning capabilities and achieves state-of-the-art performance among 7B-class DLMs, delivering strong gains on general-knowledge, math, and code benchmarks over strong baselines. These results demonstrate that principled AR-to-block-diffusion adaptation is an effective and compute-efficient alternative to training DLMs from scratch. Code: https://github.com/YuchuanTian/NBDiff.
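The context-causal attention mask described in the abstract can be sketched in NumPy. This is an illustrative reconstruction, not the authors' implementation: the function name `context_causal_mask` and the two-region layout (causal context prefix, bidirectional active block) are assumptions based on the description above.

```python
import numpy as np

def context_causal_mask(ctx_len: int, block_len: int) -> np.ndarray:
    """Boolean attention mask (True = may attend) for one active block.

    Context tokens attend causally among themselves; tokens in the
    active block attend to the full context and bidirectionally to
    every token inside the block.
    """
    n = ctx_len + block_len
    mask = np.zeros((n, n), dtype=bool)
    # causal (lower-triangular) attention over the context prefix
    mask[:ctx_len, :ctx_len] = np.tril(np.ones((ctx_len, ctx_len), dtype=bool))
    # active block attends to all context tokens ...
    mask[ctx_len:, :ctx_len] = True
    # ... and bidirectionally within the block itself
    mask[ctx_len:, ctx_len:] = True
    return mask

m = context_causal_mask(ctx_len=3, block_len=2)
```

Note that with `block_len=1` the mask reduces to an ordinary causal mask, which matches the paper's framing of AR as block diffusion with block size 1.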
Problem

Research questions and friction points this paper is trying to address.

Autoregressive LLMs have sequential bottlenecks limiting generation throughput.
Training diffusion language models from scratch wastes existing AR model knowledge.
Prior adaptation methods fail to reconcile AR causality with block-wise bidirectionality.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts AR to block-diffusion via context-causal attention masks
Uses auxiliary AR loss and gradual block size increase
Enables parallel generation while retaining pretrained knowledge
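The last two innovations can be sketched as a training-loop fragment. This is a hypothetical illustration: the schedule of block sizes, the linear phase split, and the loss weight `lam` are assumptions, not values reported by the paper.

```python
def block_size_schedule(step: int, total_steps: int,
                        sizes=(1, 2, 4, 8, 16)) -> int:
    """Progressive block-size growth: start AR-like at block size 1
    and step up toward the target block size as training proceeds.
    The size ladder and linear phase split are illustrative choices."""
    phase = min(step * len(sizes) // total_steps, len(sizes) - 1)
    return sizes[phase]

def total_loss(diffusion_loss: float, ar_loss: float, lam: float = 0.5) -> float:
    """Combined objective: masked block-diffusion loss plus a weighted
    auxiliary next-token (AR) loss to retain pretrained knowledge.
    The weight lam is a hypothetical hyperparameter."""
    return diffusion_loss + lam * ar_loss
```

Starting at block size 1 keeps the early adaptation steps close to the pretrained AR objective, then gradually exposes the model to wider bidirectional blocks, which is what the abstract means by maintaining train-inference consistency.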
Yuchuan Tian
Peking University & Huawei Technologies
Yuchen Liang
The Ohio State University
Diffusion models, Generative models, Anomaly detection, Bayesian analysis, Signal processing
Jiacheng Sun
Peking University & Huawei Technologies
Shuo Zhang
Peking University & Huawei Technologies
Guangwen Yang
Professor of Computer Science and Technology, Tsinghua University
Yingte Shu
Peking University & Huawei Technologies
Sibo Fang
Peking University & Huawei Technologies
Tianyu Guo
Peking University & Huawei Technologies
Kai Han
Peking University & Huawei Technologies
Chao Xu
Peking University & Huawei Technologies
Hanting Chen
Noah's Ark Lab, Huawei
deep learning, machine learning, computer vision
Xinghao Chen
Peking University & Huawei Technologies
Yunhe Wang
Noah's Ark Lab, Huawei Technologies
Deep Learning, Language Model, Machine Learning, Computer Vision