Relaxing Positional Alignment in Masked Diffusion Language Models

📅 2026-01-30
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the sensitivity of masked diffusion language models to token misalignment in open-ended text generation, which stems from their strict positional alignment assumption and often degrades output quality. To mitigate this issue, the authors propose an alignment-flexible fine-tuning strategy that introduces a special <slack> token into the Connectionist Temporal Classification (CTC) objective. This modification relaxes positional supervision during training, thereby alleviating the mismatch between training dynamics and decoding behavior. The proposed approach significantly enhances the model's robustness to positional shifts and consistently outperforms the original model across five open-ended text generation benchmarks, yielding notable improvements in both generation quality and stability.

πŸ“ Abstract
Masked diffusion language models (MDLMs) have emerged as a promising alternative to dominant autoregressive approaches. Although they achieve competitive performance on several tasks, a substantial gap remains in open-ended text generation. We hypothesize that one cause of this gap is that strict positional prediction makes MDLM decoding highly sensitive to token misalignment, and we show through controlled interventions that a one-position shift can severely disrupt semantics. This observation suggests that enforcing strict positional supervision during training is misaligned with the irreversible denoising dynamics of MDLM decoding. Motivated by this mismatch, we adopt an alignment-flexible supervision strategy during fine-tuning. Specifically, we introduce a special tokenvia the connectionist temporal classification objective. We apply this approach to the widely used MDLM model and conduct experiments on five open-ended text generation benchmarks. Our method consistently outperforms the original model and improves robustness to positional shifts, indicating that relaxing strict positional supervision is an important factor in improving generation quality in MDLMs.
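To make the CTC idea concrete, here is a minimal sketch (not the authors' code) of how a slack token relaxes positional supervision: the `<slack>` token is assumed to play the role of CTC's "blank" symbol, so the loss marginalizes over all monotonic placements of the target tokens among the model's output slots instead of pinning each token to one fixed position. The `SLACK` id and the toy distributions below are illustrative assumptions.

```python
import math

NEG_INF = float("-inf")
SLACK = 0  # assumed vocabulary id for the <slack> token


def logsumexp(*xs):
    m = max(xs)
    if m == NEG_INF:
        return NEG_INF
    return m + math.log(sum(math.exp(x - m) for x in xs))


def ctc_neg_log_likelihood(log_probs, target):
    """Standard CTC forward algorithm.

    log_probs: length-T list of per-slot log-distributions over the vocab.
    target:    list of token ids (no SLACK inside).
    Returns -log P(target), marginalized over all monotonic alignments
    that interleave SLACK tokens and collapse repeats.
    """
    # Extended target: slack, y1, slack, y2, ..., slack
    ext = [SLACK]
    for y in target:
        ext += [y, SLACK]
    S, T = len(ext), len(log_probs)

    # alpha[s]: log-prob of having emitted ext[:s+1] after the current slot
    alpha = [NEG_INF] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, T):
        new = [NEG_INF] * S
        for s in range(S):
            a = alpha[s]
            if s >= 1:
                a = logsumexp(a, alpha[s - 1])
            # a slack between two different labels may be skipped
            if s >= 2 and ext[s] != SLACK and ext[s] != ext[s - 2]:
                a = logsumexp(a, alpha[s - 2])
            new[s] = a + log_probs[t][ext[s]]
        alpha = new
    return -logsumexp(alpha[S - 1], alpha[S - 2])


# Toy check: 3 output slots, vocab {0: <slack>, 1, 2}, distributions
# peaked on emitting [1, <slack>, 2]; target [1, 2] is then certain.
peaked = [
    [NEG_INF, 0.0, NEG_INF],
    [0.0, NEG_INF, NEG_INF],
    [NEG_INF, NEG_INF, 0.0],
]
nll = ctc_neg_log_likelihood(peaked, [1, 2])
print(nll == 0.0)  # → True
```

Because the loss sums over every valid alignment (here, [1, `<slack>`, 2] as well as shifted variants such as [`<slack>`, 1, 2]), a one-position shift in the model's outputs no longer zeroes out the supervision signal, which is the mismatch the paper targets.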
Problem

Research questions and friction points this paper is trying to address.

masked diffusion language models
positional alignment
open-ended text generation
token misalignment
strict positional supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

masked diffusion language models
positional alignment
connectionist temporal classification
open-ended text generation
robustness to token shift