AI Summary
Existing discrete diffusion language models (e.g., MDLMs, USDMs) suffer severe performance degradation under few-step generation and are incompatible with few-step distillation techniques designed for continuous diffusion. Although USDMs partially alleviate these limitations, their complex loss design hinders scalability. This paper proposes a simplified denoising training framework: it optimizes only over tokens replaced by noise, reframes denoising as self-supervised learning, and introduces a contrastive-learning-inspired negative-gradient mechanism to enhance discriminability. The approach improves training stability and few-step generation quality while matching ELBO-level performance. It outperforms existing discrete diffusion language models across multiple metrics, offering an efficient and scalable path for language generation.
Abstract
Diffusion models have recently been extended to language generation through Masked Diffusion Language Models (MDLMs), which achieve performance competitive with strong autoregressive models. However, MDLMs tend to degrade in the few-step regime and cannot directly adopt existing few-step distillation methods designed for continuous diffusion models, as they lack the intrinsic property of mapping from noise to data. Recent Uniform-state Diffusion Models (USDMs), initialized from a uniform prior, alleviate some limitations but still suffer from complex loss formulations that hinder scalability. In this work, we propose a simplified denoising-based loss for USDMs that optimizes only noise-replaced tokens, stabilizing training and matching ELBO-level performance. Furthermore, by framing denoising as self-supervised learning, we introduce a simple modification to our denoising loss with contrastive-inspired negative gradients, which is practical and yields additional improvements in generation quality.
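The core idea of the simplified loss (cross-entropy computed only at positions whose token was replaced by noise) can be sketched in a few lines. This is a minimal NumPy illustration under our own assumptions, not the paper's implementation: we assume uniform-state corruption changes the token at corrupted positions, so replaced positions are identified by comparing the noisy sequence with the clean one.

```python
import numpy as np

def simplified_denoising_loss(logits, clean_tokens, noisy_tokens):
    """Cross-entropy restricted to noise-replaced positions (hypothetical sketch).

    logits:       (B, L, V) model predictions for the clean tokens
    clean_tokens: (B, L)    original token ids
    noisy_tokens: (B, L)    token ids after uniform-state corruption
    """
    # Positions where corruption actually replaced the token.
    replaced = noisy_tokens != clean_tokens  # (B, L) boolean mask

    # Numerically stable log-softmax over the vocabulary axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

    # Negative log-likelihood of the clean token at every position.
    nll = -np.take_along_axis(log_probs, clean_tokens[..., None], axis=-1)
    nll = nll.squeeze(-1)  # (B, L)

    # Average the loss only over replaced positions; unchanged positions
    # contribute no gradient.
    return (nll * replaced).sum() / np.maximum(replaced.sum(), 1)

# Example: batch of 1, length 3, vocab 4; only position 1 is corrupted.
clean = np.array([[0, 1, 2]])
noisy = np.array([[0, 3, 2]])
logits = np.zeros((1, 3, 4))  # uniform predictions
loss = simplified_denoising_loss(logits, clean, noisy)  # = log(4) at position 1
```

With uniform logits the loss equals log(V) at the single replaced position; changing the logits at non-replaced positions leaves the loss untouched, which is exactly the "optimize only noise-replaced tokens" property the abstract describes.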