🤖 AI Summary
Non-autoregressive generation often suffers from low sampling efficiency due to error accumulation and distribution shift. This work proposes Discrete Stochastic Localization (DSL), which trains a single SNR-invariant denoiser within a Diffusion Transformer to jointly handle partially observed text across a continuum of noise levels, bridging intermediate draft noise and mask-style endpoint corruption. Requiring only changes to model training, with no complex sampling strategies, DSL substantially improves step efficiency, self-correction, and uncertainty calibration. On OpenWebText, DSL achieves a higher MAUVE score than MDLM+ReMDM with roughly one-quarter of the denoiser calls, and at high compute budgets its generation quality matches that of autoregressive models.
📝 Abstract
Non-autoregressive (NAR) generation reduces decoding latency by predicting many tokens in parallel, but iterative refinement often suffers from error accumulation and distribution shift under self-generated drafts. Masked diffusion language models (MDLMs) and their remasking samplers (e.g., ReMDM) can be viewed as modern NAR iterative refinement, where generation repeatedly revises a partially observed draft. In this work we show that \emph{training alone} can substantially improve the step-efficiency of MDLM/ReMDM sampling. We propose \textsc{DSL} (Discrete Stochastic Localization), which trains a single SNR-invariant denoiser across a continuum of corruption levels, bridging intermediate draft noise and mask-style endpoint corruption within one Diffusion Transformer. On OpenWebText, \textsc{DSL} fine-tuning yields large MAUVE gains at low step budgets, surpassing the MDLM+ReMDM baseline with \(\sim 4\times\) fewer denoiser evaluations, and matches autoregressive quality at high budgets. Analyses show improved self-correction and uncertainty calibration, making remasking markedly more compute-efficient.
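To make the "single denoiser across a continuum of corruption levels" idea concrete, here is a toy sketch of a corruption process that interpolates between clean text, draft-token noise, and mask-style endpoint corruption. Everything here (the `corrupt` function, the `MASK` token, the 50/50 mixing rule) is an illustrative assumption for exposition, not the paper's actual noise schedule; the point is only that one model sees a partially observed draft at every noise level \(t\), rather than training separate denoisers per regime.

```python
import random

MASK = "<mask>"  # hypothetical mask token, not the paper's vocabulary

def corrupt(tokens, draft, t, rng):
    """Toy corruption at noise level t in [0, 1].

    Each position is independently:
      - replaced by MASK with prob t/2   (mask-style endpoint corruption),
      - replaced by the draft's token with prob t/2  (intermediate draft noise),
      - kept clean otherwise.
    At t=0 the input is untouched; at t=1 every token is mask or draft.
    A shared denoiser would be trained to recover `tokens` from this
    output at all t, conditioning on t (or an SNR embedding of it).
    """
    out = []
    for clean_tok, draft_tok in zip(tokens, draft):
        u = rng.random()
        if u < t / 2:
            out.append(MASK)       # endpoint-style masking
        elif u < t:
            out.append(draft_tok)  # noise from a self-generated draft
        else:
            out.append(clean_tok)  # clean token survives
    return out
```

A training loop in this spirit would sample \(t \sim \mathcal{U}(0, 1)\) per example, corrupt the target with `corrupt`, and minimize cross-entropy on the clean tokens, so one Diffusion Transformer learns to denoise both lightly revised drafts and fully masked inputs.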