Understanding the Reversal Curse Mitigation in Masked Diffusion Models through Attention and Training Dynamics

📅 2026-02-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the "reversal curse," the failure of autoregressive language models to generalize from learning "A is B" to inferring "B is A," and shows that masked diffusion language models substantially mitigate this failure, although the underlying mechanism has been unclear. Through theoretical analysis and empirical experiments, the authors trace the mitigation to the interplay between weight sharing in the Transformer encoder architecture and training dynamics, which together induce a positive correlation between forward and reverse attention scores and align their gradients. Combining single-layer encoder modeling, controlled synthetic tasks, and large-scale diffusion model experiments, the study shows that this synergy between architectural design and training dynamics, not merely the any-order training objective, drives the improved generalization on symmetric relational reasoning, offering a novel perspective on how language models acquire such capabilities.

📝 Abstract
Autoregressive language models (ARMs) suffer from the reversal curse: after learning that "$A$ is $B$", they often fail on the reverse query "$B$ is $A$". Masked diffusion-based language models (MDMs) exhibit this failure in a much weaker form, but the underlying reason has remained unclear. A common explanation attributes this mitigation to the any-order training objective. However, observing "[MASK] is $B$" during training does not necessarily teach the model to handle the reverse prompt "$B$ is [MASK]". We show that the mitigation arises from architectural structure and its interaction with training. In a one-layer Transformer encoder, weight sharing couples the two directions by making forward and reverse attention scores positively correlated. In the same setting, we further show that the corresponding gradients are aligned, so minimizing the forward loss also reduces the reverse loss. Experiments on both controlled toy tasks and large-scale diffusion language models support these mechanisms, explaining why MDMs partially overcome a failure mode that persists in strong ARMs.
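The coupling claimed in the abstract can be illustrated with a minimal numerical sketch (a toy under our own assumptions, not the paper's exact model). In an encoder, the same $W_Q$ and $W_K$ are applied at every position, so the forward attention logit ($A$ attends to $B$) and the reverse logit ($B$ attends to $A$) are two evaluations of one bilinear form $M = W_Q W_K^\top$; their correlation over random embeddings is governed by the symmetric part of $M$. Here we bias $M$ toward symmetry by hand to stand in for the symmetry the paper argues training induces:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32

# An encoder applies the same W_Q, W_K at every position, so the
# forward logit (A attends to B) and the reverse logit (B attends
# to A) share one bilinear form M = W_Q @ W_K.T.
W_Q = rng.normal(size=(d, d)) / d**0.5
W_K = rng.normal(size=(d, d)) / d**0.5
M = W_Q @ W_K.T

# Hand-bias M toward its symmetric part, standing in for the
# symmetry the paper argues training induces (an assumption of
# this sketch; random weights alone do not give it).
sym, anti = 0.5 * (M + M.T), 0.5 * (M - M.T)
M_coupled = sym + 0.2 * anti

# Correlation of forward vs. reverse attention logits over random
# token embeddings a, b: fwd = a^T M b, rev = b^T M a.
fwd, rev = [], []
for _ in range(5000):
    a, b = rng.normal(size=d), rng.normal(size=d)
    fwd.append(a @ M_coupled @ b)
    rev.append(b @ M_coupled @ a)
corr = np.corrcoef(fwd, rev)[0, 1]
print(f"forward/reverse logit correlation: {corr:.2f}")
```

With a purely antisymmetric $M$ the correlation would be $-1$, and with a generic random $M$ it is near $0$; a strongly positive value, as here, is exactly the forward/reverse coupling the abstract attributes to weight sharing plus training.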
Problem

Research questions and friction points this paper is trying to address.

reversal curse
masked diffusion models
autoregressive language models
attention mechanism
training dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

reversal curse
masked diffusion models
attention mechanism
weight sharing
training dynamics
Sangwoo Shin
Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
BumJun Kim
Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
Kyelim Lee
Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
Moongyu Jeon
Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
Albert No
Associate Professor of Department of Artificial Intelligence at Yonsei University
Learning Theory · Information Theory · Source Coding · Probability Theory