Masks Can Be Distracting: On Context Comprehension in Diffusion Language Models

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Masked Diffusion Language Models (MDLMs) suffer from two critical context-comprehension bottlenecks: (1) locality bias: despite bidirectional attention and a global denoising objective, they over-rely on local context and struggle to capture long-range dependencies; and (2) mask interference: the large number of appended mask tokens required for generation acts as a distractor, degrading the model's ability to process relevant information. To address the second issue, the authors propose a mask-agnostic loss function that encourages predictions to remain invariant to the number of appended masks, decoupling mask injection from semantic modeling at the level of the training objective. Systematic ablations show that fine-tuning with this objective substantially mitigates the distracting effect of masks and improves MDLM robustness. Overall, the work reveals key limitations of the current MDLM training paradigm and offers actionable guidance for building diffusion-based language models with stronger context comprehension.

📝 Abstract
Masked Diffusion Language Models (MDLMs) have recently emerged as a promising alternative to Autoregressive Language Models (ARLMs), leveraging a denoising objective that, in principle, should enable more uniform context utilisation. In this work, we examine the context comprehension abilities of MDLMs and uncover two key limitations. First, despite their more global training objective and bidirectional attention mechanism, MDLMs, similarly to ARLMs, exhibit a strong locality bias: performance is highly sensitive to the position of relevant information within the input, favouring local over distant context. Second, we show that appending a large number of mask tokens--required for generation--can significantly degrade context comprehension. Through systematic ablations, we find that these masks act as distractors, reducing the model's ability to process relevant information. To address this, we introduce a mask-agnostic loss function that encourages predictions to remain invariant to the number of appended masks. Fine-tuning with this objective substantially mitigates the distracting effect of masks, improving the robustness of MDLMs. Overall, our findings reveal critical limitations of the current MDLM training paradigm and provide actionable insights for building diffusion-based language models with stronger context comprehension.
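The abstract does not specify how the mask-agnostic loss is implemented. One natural reading is a consistency penalty: run the model on the same context with two different numbers of appended masks and penalize divergence between the predictions at the shared context positions. The sketch below follows that reading; the model class, `MASK_ID`, and the symmetrized-KL choice are all illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a mask-agnostic consistency loss (assumption:
# the loss penalizes prediction drift as the number of appended masks varies).
import torch
import torch.nn.functional as F

MASK_ID = 0  # placeholder mask-token id (assumption)


class ToyMDLM(torch.nn.Module):
    """Tiny stand-in for an MDLM: embedding + global mean-pooling mixing +
    projection, so appended masks actually influence the context logits."""

    def __init__(self, vocab_size=100, dim=16):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, dim)
        self.proj = torch.nn.Linear(dim, vocab_size)

    def forward(self, ids):
        h = self.emb(ids)
        h = h + h.mean(dim=1, keepdim=True)  # crude global context mixing
        return self.proj(h)  # (batch, seq_len, vocab)


def mask_agnostic_loss(model, input_ids, n_masks_a=8, n_masks_b=64):
    """Symmetrized KL between context predictions made with different
    numbers of appended mask tokens (an invariance/consistency penalty)."""

    def with_masks(ids, n):
        pad = torch.full((ids.size(0), n), MASK_ID, dtype=ids.dtype)
        return torch.cat([ids, pad], dim=1)

    ctx_len = input_ids.size(1)
    logits_a = model(with_masks(input_ids, n_masks_a))[:, :ctx_len]
    logits_b = model(with_masks(input_ids, n_masks_b))[:, :ctx_len]

    log_p = F.log_softmax(logits_a, dim=-1)
    log_q = F.log_softmax(logits_b, dim=-1)
    return 0.5 * (
        F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
        + F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
    )
```

In practice such a term would presumably be added to the standard masked-diffusion training objective with a weighting coefficient, rather than used alone.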
Problem

Research questions and friction points this paper is trying to address.

MDLMs exhibit strong locality bias favoring local over distant context
Appended mask tokens degrade context comprehension by acting as distractors
Current MDLM training paradigm lacks robustness in context utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mask-agnostic loss function for robustness
Fine-tuning to mitigate mask distraction effects
Improving context comprehension in diffusion models