🤖 AI Summary
This work addresses the performance limitations of continuous diffusion language models caused by the hard rounding operation that maps final embeddings to discrete tokens. To overcome this bottleneck, the authors propose CoDAR, a framework that performs diffusion-based denoising entirely in a continuous embedding space and introduces a context-aware autoregressive Transformer decoder. This decoder cross-attends to the denoised embeddings to perform precise, context-conditioned token mapping without resorting to hard rounding. Experimental results demonstrate that CoDAR significantly outperforms latent diffusion models on LM1B and OpenWebText, achieving generation quality on par with strong discrete diffusion approaches. Moreover, the method allows flexible control over the trade-off between text fluency and diversity through the decoding temperature.
📝 Abstract
We study why continuous diffusion language models (DLMs) have lagged behind discrete diffusion approaches despite their appealing continuous generative dynamics. Under a controlled token-recovery study, we identify token rounding, the final projection from denoised embeddings to tokens, as a primary bottleneck. Building on these insights, we propose CoDAR (Continuous Diffusion with Contextual AutoRegressive Decoder), a two-stage framework that keeps diffusion entirely continuous in an embedding space while learning a strong, context-conditional discretizer: an autoregressive Transformer decoder that cross-attends to the denoised embedding sequence and performs contextualized rounding to tokens. Experiments on LM1B and OpenWebText demonstrate that CoDAR substantially improves generation quality over latent diffusion and becomes competitive with strong discrete DLMs, while exposing a simple decoder-temperature knob to navigate the fluency-diversity trade-off.
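The contextual discretizer described above can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the model sizes, module names (`ContextualARDecoder`, `decode`), and sampling loop are assumptions. It shows the two ideas the abstract names: an autoregressive Transformer decoder that cross-attends to the denoised embedding sequence instead of hard-rounding each embedding to its nearest token, and a decoder temperature that scales the output logits to trade fluency against diversity.

```python
import torch
import torch.nn as nn


class ContextualARDecoder(nn.Module):
    """Hypothetical sketch of CoDAR's second stage: an AR Transformer
    decoder that cross-attends to denoised embeddings (the diffusion
    output) and emits token logits, i.e. contextualized rounding."""

    def __init__(self, vocab_size: int, d_model: int = 64,
                 nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, prev_tokens: torch.Tensor,
                denoised_embs: torch.Tensor) -> torch.Tensor:
        # Causal self-attention over already-generated tokens;
        # cross-attention ("memory") over the denoised embedding sequence.
        t = prev_tokens.size(1)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        h = self.decoder(self.tok_emb(prev_tokens), denoised_embs,
                         tgt_mask=causal)
        return self.out(h)  # (batch, t, vocab) logits


@torch.no_grad()
def decode(model: ContextualARDecoder, denoised_embs: torch.Tensor,
           bos_id: int, length: int, temperature: float = 1.0) -> torch.Tensor:
    """Sample tokens left-to-right; `temperature` is the fluency-diversity
    knob: low values sharpen the distribution, high values flatten it."""
    batch = denoised_embs.size(0)
    tokens = torch.full((batch, 1), bos_id, dtype=torch.long)
    for _ in range(length):
        logits = model(tokens, denoised_embs)[:, -1] / temperature
        next_tok = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens[:, 1:]  # drop the BOS column
```

Because the decoder conditions on the whole embedding sequence and on previously emitted tokens, each discretization step sees context that a per-position nearest-neighbor rounding cannot, which is the bottleneck the paper's token-recovery study identifies.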