AI Summary
This work addresses the sensitivity of masked diffusion language models to token misalignment in open-ended text generation, which stems from their strict positional alignment assumption and often degrades output quality. To mitigate this issue, the authors propose an alignment-flexible fine-tuning strategy that introduces a special <slack> token into the Connectionist Temporal Classification (CTC) objective. This modification relaxes positional supervision during training, thereby alleviating the mismatch between training dynamics and decoding behavior. The proposed approach significantly enhances the model's robustness to positional shifts and consistently outperforms the original model across five open-ended text generation benchmarks, yielding notable improvements in both generation quality and stability.
Abstract
Masked diffusion language models (MDLMs) have emerged as a promising alternative to dominant autoregressive approaches. Although they achieve competitive performance on several tasks, a substantial gap remains in open-ended text generation. We hypothesize that one cause of this gap is that strict positional prediction makes MDLM decoding highly sensitive to token misalignment, and we show through controlled interventions that a one-position shift can severely disrupt semantics. This observation suggests that enforcing strict positional supervision during training is misaligned with the irreversible denoising dynamics of MDLM decoding. Motivated by this mismatch, we adopt an alignment-flexible supervision strategy during fine-tuning. Specifically, we introduce a special <slack> token via the connectionist temporal classification (CTC) objective. We apply this approach to the widely used MDLM model and conduct experiments on five open-ended text generation benchmarks. Our method consistently outperforms the original model and improves robustness to positional shifts, indicating that relaxing strict positional supervision is an important factor in improving generation quality in MDLMs.
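To make the alignment-flexible supervision concrete, the sketch below implements the standard CTC forward recursion, which sums the probability of a target sequence over all monotonic alignments rather than enforcing a single strict position per token. This is a generic illustration of the CTC objective, not the paper's training code: the blank symbol here plays the structural role that the paper's <slack> token plays, and the vocabulary and per-frame probabilities are invented for the example.

```python
# Minimal CTC forward-algorithm sketch (probability space, pure Python).
# Hypothetical setup: the blank symbol stands in for a slack-like token
# that absorbs positional flexibility; vocab and probabilities are illustrative.

BLANK = 0  # index of the blank/slack symbol

def ctc_prob(probs, target):
    """P(target | x) summed over all alignments, via the CTC forward recursion.

    probs: list of per-frame distributions over the vocabulary (blank at index 0).
    target: list of non-blank symbol indices.
    """
    # Extended label sequence: blanks interleaved around every target symbol.
    ext = [BLANK]
    for c in target:
        ext += [c, BLANK]
    S, T = len(ext), len(probs)

    # alpha[s] = total probability of alignments of the first frames that
    # end at position s of the extended sequence.
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    if S > 1:
        alpha[1] = probs[0][ext[1]]

    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                      # stay on the same label
            if s >= 1:
                a += alpha[s - 1]             # advance by one position
            # Skip a blank only when moving between two distinct symbols.
            if s >= 2 and ext[s] != BLANK and ext[s] != ext[s - 2]:
                a += alpha[s - 2]
            new[s] = a * probs[t][ext[s]]
        alpha = new

    # Valid alignments end on the last symbol or the trailing blank.
    return alpha[S - 1] + (alpha[S - 2] if S > 1 else 0.0)

# Two frames, vocab = {blank, 'a'}, uniform per-frame probabilities.
frames = [[0.5, 0.5], [0.5, 0.5]]
print(ctc_prob(frames, [1]))  # sums the paths "a-", "-a", "aa" -> 0.75
```

Because the loss marginalizes over alignments, a prediction shifted by one position still contributes probability mass to the target, which is exactly the relaxation of strict positional supervision the abstract describes.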