🤖 AI Summary
This work uncovers the distinctive dynamics of attention mechanisms in masked diffusion models, revealing a fundamental divergence from autoregressive models. During denoising, shallow layers exhibit a phenomenon termed “attention floating,” leveraging dispersed tokens to construct global structure, while deeper layers concentrate on semantic content, establishing a two-stage paradigm of “shallow structural awareness and deep content focus.” Through attention visualization, ablation studies, and evaluation on knowledge-intensive tasks, this study provides the first systematic account of how this mechanism underpins strong in-context learning capabilities. Experiments demonstrate that masked diffusion models achieve up to twice the performance of autoregressive counterparts on such tasks, substantiating the critical role of the identified attention floating mechanism.
📝 Abstract
Masked diffusion models (MDMs), which leverage bidirectional attention and a denoising process, are narrowing the performance gap with autoregressive models (ARMs). However, their internal attention mechanisms remain under-explored. This paper investigates attention behaviors in MDMs, revealing the phenomenon of Attention Floating. Unlike ARMs, where attention converges to a fixed sink, MDMs exhibit dynamic, dispersed attention anchors that shift across denoising steps and layers. Further analysis reveals a Shallow Structure-Aware, Deep Content-Focused attention mechanism: shallow layers utilize floating tokens to build a global structural framework, while deeper layers allocate more capacity to capturing semantic content. Empirically, this distinctive attention pattern provides a mechanistic explanation for the strong in-context learning capabilities of MDMs, enabling them to achieve up to double the performance of ARMs on knowledge-intensive tasks. All code and datasets are available at https://github.com/NEUIR/Attention-Floating.
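The contrast the abstract draws, a fixed attention sink in ARMs versus dispersed, shifting anchors in MDMs, can be quantified from an attention map alone. The sketch below is a hypothetical illustration, not taken from the paper's released code: `attention_anchor_stats` is an assumed helper name, and the `[num_heads, seq_len, seq_len]` layout of row-stochastic attention weights is an assumed convention. It measures how much attention mass lands on the single most-attended key token (high for a sink) and the entropy of the per-key attention distribution (high for dispersed, floating anchors).

```python
import numpy as np

def attention_anchor_stats(attn):
    """attn: [num_heads, seq_len, seq_len] row-stochastic attention weights.

    Returns (top_share, entropy):
      top_share - average attention mass on the most-attended key token
                  (close to 1.0 for a fixed sink),
      entropy   - entropy of the per-key attention distribution
                  (high when attention is dispersed across tokens).
    """
    # Average attention each key token receives across all queries, per head.
    per_key = attn.mean(axis=1)                  # [num_heads, seq_len]
    # Mass on the single most-attended token, averaged over heads.
    top_share = float(per_key.max(axis=-1).mean())
    # Head-averaged key distribution, renormalized for the entropy computation.
    p = per_key.mean(axis=0)
    p = p / p.sum()
    entropy = float(-(p * np.log(p + 1e-12)).sum())
    return top_share, entropy

# Sink-like map: every query puts almost all its mass on token 0 (ARM-style).
sink = np.full((4, 8, 8), 0.01)
sink[:, :, 0] = 1.0 - 0.01 * 7
# Dispersed map: uniform attention over all keys (floating-anchor-style).
flat = np.full((4, 8, 8), 1.0 / 8)

sink_share, sink_ent = attention_anchor_stats(sink)
flat_share, flat_ent = attention_anchor_stats(flat)
```

Running such a probe per layer and per denoising step is one way to reproduce the kind of evidence the abstract describes: a sink would show a stable high `top_share` at a fixed position, while floating anchors would show lower concentration whose peak position moves across steps.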