🤖 AI Summary
This work investigates the "reversal curse"—the difficulty autoregressive language models have in generalizing from learning "A is B" to inferring "B is A"—and shows that masked diffusion language models substantially mitigate this failure, a phenomenon whose underlying mechanism had remained unclear. Through theoretical analysis and empirical experiments, the authors identify the key as the interplay between weight sharing in the Transformer encoder architecture and training dynamics, which jointly induce positive correlation between forward and reverse attention scores and align their gradients. Combining single-layer encoder modeling, controlled synthetic tasks, and large-scale diffusion model experiments, the study reveals that this synergy between architectural design and training dynamics—not merely the any-order training objective—drives improved generalization in symmetric relational reasoning, offering a novel perspective on how language models acquire such capabilities.
📝 Abstract
Autoregressive language models (ARMs) suffer from the reversal curse: after learning that "$A$ is $B$", they often fail on the reverse query "$B$ is $A$". Masked diffusion language models (MDMs) exhibit this failure in a much weaker form, but the underlying reason has remained unclear. A common explanation attributes this mitigation to the any-order training objective. However, observing "[MASK] is $B$" during training does not necessarily teach the model to handle the reverse prompt "$B$ is [MASK]". We show that the mitigation arises from architectural structure and its interaction with training. In a one-layer Transformer encoder, weight sharing couples the two directions by making forward and reverse attention scores positively correlated. In the same setting, we further show that the corresponding gradients are aligned, so minimizing the forward loss also reduces the reverse loss. Experiments on both controlled toy tasks and large-scale diffusion language models support these mechanisms, explaining why MDMs partially overcome a failure mode that persists in strong ARMs.
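The weight-sharing coupling can be illustrated with a toy calculation (our simplification, not the paper's full analysis): if the attention logit is modeled as a bilinear form $s(x, y) = x^\top M y$ with a single shared matrix $M = W_Q W_K^\top$, then the forward score $s(x_A, x_B)$ and reverse score $s(x_B, x_A)$ depend on the same parameters, and their gradients with respect to $M$ are transposes of each other, so their Frobenius inner product is $(x_A \cdot x_B)^2 \geq 0$ — a crude analogue of the gradient alignment described above.

```python
import numpy as np

# Toy sketch (our simplification): model the attention logit as a bilinear
# form s(x, y) = x^T M y, where M = W_Q W_K^T is shared by both directions.
rng = np.random.default_rng(0)
d = 16
x_A = rng.standard_normal(d)  # embedding of entity A (illustrative)
x_B = rng.standard_normal(d)  # embedding of entity B (illustrative)

grad_fwd = np.outer(x_A, x_B)  # d s(x_A, x_B) / d M  (forward direction)
grad_rev = np.outer(x_B, x_A)  # d s(x_B, x_A) / d M  (reverse direction)

# The two gradients are transposes, so their Frobenius inner product
# collapses to (x_A . x_B)^2, which is never negative.
inner = np.sum(grad_fwd * grad_rev)
print(np.isclose(inner, (x_A @ x_B) ** 2))  # → True
print(inner >= 0)                           # → True
```

Because the inner product is non-negative for any pair of embeddings, a gradient step that lowers the forward score's loss cannot, in this simplified setting, point directly against the reverse score's gradient — consistent with the paper's claim that forward training also reduces the reverse loss.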