Sequential Diffusion Language Models

📅 2025-09-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion language models (DLMs) suffer from fixed-length decoding and incompatibility with KV caching; while block-wise diffusion alleviates this, it relies on rigid block sizes and incurs high training costs. To address these limitations, we propose the Next Sequence Prediction (NSP) framework, which unifies next-token and next-block prediction to enable adaptive control over generation length. Based on NSP, we introduce the Sequential Diffusion Language Model (SDLM), a lightweight fine-tuning approach for pretrained autoregressive models that supports dynamic-length sequence generation while fully preserving KV-cache compatibility, making it the first DLM capable of variable-length prediction. SDLM incorporates dynamic masked block inference and confidence-driven contiguous subsequence decoding. Empirically, it matches or surpasses state-of-the-art autoregressive baselines using only 3.5M training samples, achieves 2.1× higher inference throughput than Qwen-2.5, and demonstrates strong scalability, as evidenced by the SDLM-32B variant.


📝 Abstract
Diffusion language models (DLMs) have strong theoretical efficiency but are limited by fixed-length decoding and incompatibility with key-value (KV) caches. Block diffusion mitigates these issues, yet still enforces a fixed block size and requires expensive training. We introduce Next Sequence Prediction (NSP), which unifies next-token and next-block prediction, enabling the model to adaptively determine the generation length at each step. When the length is fixed to 1, NSP reduces to standard next-token prediction. Building on NSP, we propose the Sequential Diffusion Language Model (SDLM), which can retrofit pre-trained autoregressive language models (ALMs) at minimal cost. Specifically, SDLM performs diffusion inference within fixed-size mask blocks, but dynamically decodes consecutive subsequences based on model confidence, thereby preserving KV-cache compatibility and improving robustness to varying uncertainty and semantics across the sequence. Experiments show that SDLM matches or surpasses strong autoregressive baselines using only 3.5M training samples, while achieving 2.1× higher throughput than Qwen-2.5. Notably, the SDLM-32B model delivers even more pronounced efficiency gains, demonstrating the strong scalability potential of our modeling paradigm. Project page and codes: https://github.com/OpenGVLab/SDLM
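The confidence-driven decoding described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `decode_prefix`, the threshold value, and the always-accept-one-token fallback are assumptions; the core idea is that, within a fixed-size mask block, the model commits only the longest contiguous prefix whose per-token confidence stays high, so the committed prefix appends cleanly to a standard KV cache.

```python
def decode_prefix(confidences, threshold=0.9):
    """Return how many leading tokens of a denoised mask block to commit.

    confidences: per-token model confidences for the block, in order.
    Always commits at least one token, so generation never stalls and
    the scheme degrades gracefully to next-token prediction.
    """
    accepted = 1  # fallback: commit at least the first token
    for conf in confidences[1:]:
        if conf < threshold:
            break  # stop at the first low-confidence token
        accepted += 1
    return accepted


# Example: a mid-block low-confidence token truncates the committed prefix,
# even though a later token happens to be confident again.
print(decode_prefix([0.99, 0.97, 0.62, 0.95]))  # → 2
```

Because only a contiguous prefix is ever committed, the accepted tokens can be cached exactly as in autoregressive decoding; the remaining masked positions are simply re-denoised in the next step.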
Problem

Research questions and friction points this paper is trying to address.

Overcoming fixed-length decoding limitations in diffusion language models
Enabling adaptive generation length while maintaining KV-cache compatibility
Retrofitting pre-trained autoregressive models with efficient diffusion inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies next-token and next-block prediction adaptively
Retrofits pre-trained autoregressive models with minimal cost
Dynamically decodes subsequences while preserving KV-cache compatibility