🤖 AI Summary
This work addresses the limitations of masked diffusion models in language generation—specifically, their weak modeling of token dependencies and the semantic incoherence stemming from their reliance on discrete marginal distributions—by relocating the diffusion process to a continuous sentence-level semantic space. The authors propose a unified fine-tuning framework that jointly trains an encoder and a demasking decoder, yielding a novel autoencoder over continuous latent representations that enables efficient non-autoregressive generation. Within the same framework, they introduce two unconditional text synthesis algorithms, ConThenDisc and ConWithinDisc, which transcend the constraints of purely discrete diffusion. Experiments built on LLaDA demonstrate significant improvements in generation quality, along with more than a tenfold acceleration in unconditional sampling speed.
📝 Abstract
Masked Diffusion Models (MDMs) provide an efficient non-causal alternative to autoregressive generation but often struggle with token dependencies and semantic incoherence due to their reliance on discrete marginal distributions. We address these limitations by shifting the diffusion process into a continuous sentence-level semantic space. We propose CRoCoDiL (Continuous and Robust Conditioned Diffusion for Language), a unified fine-tuning approach that jointly trains an encoder-demasker architecture, grounding the MDM demasking in continuous latent representations. This yields a novel autoencoder in which decoding is performed by an MDM algorithm. Within the same framework, we introduce two unconditional text synthesis algorithms: Continuous-Then-Discrete (ConThenDisc), a hybrid-diffusion approach that first generates latent representations in continuous space and then decodes them to tokens via an MDM, and Continuous-Within-Discrete (ConWithinDisc), a multi-diffusion strategy that refines latent representations throughout the discrete sampling process. Experiments using LLaDA show that our methods achieve superior generation quality and more than 10x faster sampling in an unconditional setting.
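To make the difference between the two sampling strategies concrete, here is a toy, self-contained sketch of their control flow. Everything below is an illustrative placeholder, not the paper's implementation: the "denoiser" and "demasker" are stand-ins for trained networks, and all function names, shapes, vocabulary sizes, and update rules are assumptions made for the example.

```python
import numpy as np

MASK = -1  # token id marking a masked position (illustrative choice)

def continuous_denoise(z, steps=10):
    """Toy stand-in for a continuous latent diffusion sampler.
    A real model would predict the denoised latent at each step;
    here we just shrink the noise deterministically."""
    for _ in range(steps):
        z = 0.9 * z
    return z

def mdm_demask_step(tokens, latent, frac_steps_left, rng):
    """One MDM demasking step: unmask a subset of masked positions.
    A real demasker would sample from p(token | tokens, latent);
    here we draw random token ids from a toy vocabulary of 100."""
    masked = np.flatnonzero(tokens == MASK)
    if len(masked) == 0:
        return tokens
    k = max(1, len(masked) // frac_steps_left)
    chosen = rng.choice(masked, size=min(k, len(masked)), replace=False)
    tokens[chosen] = rng.integers(0, 100, size=len(chosen))
    return tokens

def con_then_disc(length=8, steps=4, seed=0):
    """ConThenDisc: run continuous diffusion to completion first,
    then decode the finished latent to tokens with an MDM loop."""
    rng = np.random.default_rng(seed)
    z = continuous_denoise(rng.standard_normal(16))  # stage 1: continuous
    tokens = np.full(length, MASK)
    for step in range(steps):                        # stage 2: discrete
        tokens = mdm_demask_step(tokens, z, steps - step, rng)
    return tokens

def con_within_disc(length=8, steps=4, seed=0):
    """ConWithinDisc: interleave latent refinement with the
    discrete demasking steps instead of finishing it up front."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(16)
    tokens = np.full(length, MASK)
    for step in range(steps):
        z = continuous_denoise(z, steps=2)  # refine the latent a little
        tokens = mdm_demask_step(tokens, z, steps - step, rng)
    return tokens
```

The structural contrast is the point: ConThenDisc separates the two diffusions into sequential stages, while ConWithinDisc nests latent refinement inside the discrete sampling loop, so each demasking step conditions on a progressively cleaner latent.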