CoDAR: Continuous Diffusion Language Models are More Powerful Than You Think

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance limitations of continuous diffusion language models caused by the hard rounding operation that maps final embeddings to discrete tokens. To overcome this bottleneck, the authors propose CoDAR, a framework that performs diffusion-based denoising entirely in a continuous embedding space and introduces a context-aware autoregressive Transformer decoder. This decoder uses cross-attention to perform precise, context-conditioned token mapping without resorting to hard rounding. Experimental results demonstrate that CoDAR significantly outperforms latent diffusion models on LM1B and OpenWebText, achieving generation quality on par with strong discrete diffusion approaches. Moreover, the method allows flexible control over the trade-off between text fluency and diversity through the decoding temperature.
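The "decoding temperature" knob mentioned above works the same way as temperature scaling in any softmax-based sampler: dividing the decoder's logits by a temperature before the softmax sharpens (low temperature, more fluent/deterministic) or flattens (high temperature, more diverse) the output distribution. A minimal illustration, using hypothetical logits rather than anything from the actual model:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def entropy(p):
    """Shannon entropy (nats) of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Hypothetical decoder logits over a tiny vocabulary (illustrative only).
logits = np.array([2.0, 1.0, 0.5, 0.1])

# Lower temperature -> peakier distribution (lower entropy, less diverse);
# higher temperature -> flatter distribution (higher entropy, more diverse).
for t in (0.5, 1.0, 2.0):
    probs = softmax(logits / t)
    print(f"T={t}: entropy={entropy(probs):.3f}")
```

The printed entropies increase monotonically with temperature, which is the fluency-diversity trade-off the paper exposes at decode time.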

📝 Abstract
We study why continuous diffusion language models (DLMs) have lagged behind discrete diffusion approaches despite their appealing continuous generative dynamics. Under a controlled token-recovery study, we identify token rounding, the final projection from denoised embeddings to tokens, as a primary bottleneck. Building on these insights, we propose CoDAR (Continuous Diffusion with Contextual AutoRegressive Decoder), a two-stage framework that keeps diffusion entirely continuous in an embedding space while learning a strong, context-conditional discretizer: an autoregressive Transformer decoder that cross-attends to the denoised embedding sequence and performs contextualized rounding to tokens. Experiments on LM1B and OpenWebText demonstrate that CoDAR substantially improves generation quality over latent diffusion and becomes competitive with strong discrete DLMs, while exposing a simple decoder-temperature knob to navigate the fluency-diversity trade-off.
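The abstract's core mechanism, an autoregressive decoder that cross-attends to the denoised embedding sequence instead of rounding each position independently, can be sketched in a few lines. Everything below is a toy stand-in, not the paper's implementation: the projection matrices are random rather than trained, the decoder state is just the previous token's embedding rather than a full Transformer stack, and `denoised` simulates the output of the continuous diffusion stage.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab, seq_len = 16, 32, 6

# Stand-in for the denoised embedding sequence from the diffusion stage.
denoised = rng.normal(size=(seq_len, d))

# Toy (untrained) projections for cross-attention and the output head.
W_q = rng.normal(size=(d, d)) / np.sqrt(d)
W_k = rng.normal(size=(d, d)) / np.sqrt(d)
W_v = rng.normal(size=(d, d)) / np.sqrt(d)
W_out = rng.normal(size=(d, vocab)) / np.sqrt(d)
token_emb = rng.normal(size=(vocab, d))

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def decode(denoised, temperature=1.0, seed=0):
    """Autoregressively map denoised embeddings to tokens.

    At each step the decoder state cross-attends to the *whole* denoised
    sequence, so each token choice is conditioned on context rather than
    being a per-position nearest-neighbor (hard) rounding.
    """
    sampler = np.random.default_rng(seed)
    K = denoised @ W_k            # keys/values come from the embeddings
    V = denoised @ W_v
    tokens = []
    state = np.zeros(d)           # stand-in for a BOS decoder state
    for _ in range(len(denoised)):
        q = state @ W_q
        attn = softmax(q @ K.T / np.sqrt(d))   # cross-attention weights
        ctx = attn @ V                          # context vector
        logits = ctx @ W_out
        probs = softmax(logits / temperature)   # decoder-temperature knob
        tok = int(sampler.choice(vocab, p=probs))
        tokens.append(tok)
        state = token_emb[tok]    # feed the chosen token back in
    return tokens
```

As the temperature approaches zero the sampler collapses to greedy (argmax) decoding, so different random seeds produce identical token sequences; at higher temperatures the same embeddings yield more varied outputs.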
Problem

Research questions and friction points this paper is trying to address.

continuous diffusion
language models
token rounding
generative dynamics
discrete diffusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

continuous diffusion
contextual autoregressive decoder
token rounding
embedding space
fluency-diversity trade-off
Junzhe Shen
LUMIA Lab, School of Artificial Intelligence, Shanghai Jiao Tong University
Jieru Zhao
Associate Professor, Shanghai Jiao Tong University
Hardware-software co-design · AI acceleration and systems · Compiler · FPGA · High-level synthesis
Ziwei He
Shanghai Jiao Tong University
Machine Learning
Zhouhan Lin
LUMIA Lab, School of Artificial Intelligence, Shanghai Jiao Tong University; Shanghai AI Laboratory