🤖 AI Summary
This work proposes EvoToken-DLM, a novel diffusion language model that addresses a key limitation of traditional approaches: hard masking and discrete token assignments make it difficult to revise early decoding decisions and discard valuable intermediate probabilistic information. By replacing hard masks with evolving soft token distributions, EvoToken-DLM enables a progressive, revisable decoding process that smoothly transitions from masked states to discrete outputs. The model introduces a soft token evolution mechanism coupled with continuous trajectory supervision, aligning the training objective with iterative probability updates. Experimental results show that EvoToken-DLM significantly outperforms existing diffusion and masked language models across multiple benchmarks, achieving superior generation quality and greater flexibility.
📝 Abstract
Diffusion Language Models (DLMs) offer a promising alternative for language modeling by enabling parallel decoding through iterative refinement. However, most DLMs rely on hard binary masking and discrete token assignments, which hinder the revision of early decisions and underutilize intermediate probabilistic representations. In this paper, we propose EvoToken-DLM, a novel diffusion-based language modeling approach that replaces hard binary masks with evolving soft token distributions. EvoToken-DLM enables a progressive transition from masked states to discrete outputs, supporting revisable decoding. To effectively support this evolution, we introduce continuous trajectory supervision, which aligns training objectives with iterative probabilistic updates. Extensive experiments across multiple benchmarks show that EvoToken-DLM consistently achieves superior performance, outperforming strong diffusion-based and masked DLM baselines. Project webpage: https://aim-uofa.github.io/EvoTokenDLM.
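The abstract's core idea, replacing a hard mask with a per-position probability distribution that is gradually sharpened toward a discrete token, can be illustrated with a minimal sketch. Note that the paper's actual update rule and annealing schedule are not given here; the linear blend below, the function `evolve_soft_tokens`, and the random stand-in for the model's predictions are all hypothetical illustrations of the general soft-token-evolution idea:

```python
import numpy as np

def evolve_soft_tokens(soft_tokens, model_probs, step, num_steps):
    """One illustrative update: blend each position's current soft
    distribution with a freshly predicted distribution, shrinking the
    influence of the initial masked (uniform) state over time.

    soft_tokens: (seq_len, vocab_size) current per-position distributions
    model_probs: (seq_len, vocab_size) predicted distributions
    """
    # Hypothetical annealing weight: early steps keep most of the previous
    # state, later steps lean on the new prediction.
    alpha = (step + 1) / num_steps
    updated = (1.0 - alpha) * soft_tokens + alpha * model_probs
    # Renormalize so each position remains a valid distribution.
    return updated / updated.sum(axis=-1, keepdims=True)

# Toy decoding loop: 3 positions, 5-word vocabulary, 4 refinement steps.
rng = np.random.default_rng(0)
seq_len, vocab, num_steps = 3, 5, 4
soft = np.full((seq_len, vocab), 1.0 / vocab)  # fully "masked" start: uniform
for t in range(num_steps):
    preds = rng.dirichlet(np.ones(vocab), size=seq_len)  # stand-in for model output
    soft = evolve_soft_tokens(soft, preds, t, num_steps)
tokens = soft.argmax(axis=-1)  # discretize only at the end
```

Because every position stays a full distribution until the final step, earlier "decisions" are never frozen and can still be revised by later updates, which is the revisability property the abstract emphasizes.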