🤖 AI Summary
This work addresses the “reversal curse” in diffusion-based large language models (DLLMs): a persistent unidirectional bias when modeling logically bidirectional entity relations. Through systematic analysis, we identify entity fragmentation, data asymmetry, and missing relational signals as the primary causes. To mitigate these, we propose DiffER, a novel approach that combines entity-aware training with balanced, relation-enhanced data construction. Concretely, our method employs whole-entity masking and distribution-symmetric data augmentation to better align model learning with bidirectional relational semantics. Experimental results demonstrate that DiffER substantially alleviates the reversal curse, significantly improving DLLMs’ capacity for bidirectional relational reasoning across multiple benchmark tasks.
📝 Abstract
The "reversal curse" refers to the phenomenon where large language models (LLMs) exhibit predominantly unidirectional behavior when processing logically bidirectional relationships. Prior work attributed this to autoregressive training: predicting the next token inherently favors left-to-right information flow over genuine bidirectional knowledge associations. However, we observe that Diffusion LLMs (DLLMs), despite being trained bidirectionally, also suffer from the reversal curse. To investigate the root causes, we conduct systematic experiments on DLLMs and identify three key reasons: 1) entity fragmentation during training, 2) data asymmetry, and 3) missing entity relations. Motivated by this analysis, we propose Diffusion Entity-Relation Modeling (DiffER), which addresses the reversal curse through entity-aware training and balanced data construction. Specifically, DiffER introduces whole-entity masking, which mitigates entity fragmentation by predicting complete entities in a single step. DiffER further employs distribution-symmetric and relation-enhanced data construction strategies to alleviate data asymmetry and missing relations. Extensive experiments demonstrate that DiffER effectively alleviates the reversal curse in Diffusion LLMs, offering new perspectives for future research.
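To make the whole-entity masking idea concrete, here is a minimal illustrative sketch (not the paper's implementation; the function name, word-level tokenization, and span format are assumptions). Standard diffusion-LM training masks tokens independently, which can fragment a multi-token entity; the sketch below instead masks each entity span all-or-nothing, so the model must denoise complete entities in a single step:

```python
import random

def whole_entity_mask(tokens, entity_spans, mask_rate=0.5,
                      mask_token="[MASK]", seed=None):
    """Produce a noised sequence for one diffusion denoising step.

    Entity spans (half-open [start, end) index pairs) are treated as
    atomic units: all of a span's tokens are masked together or left
    intact together. Non-entity tokens are masked independently.
    """
    rng = random.Random(seed)
    masked = list(tokens)
    in_entity = set()
    # All-or-nothing masking decision per entity span.
    for start, end in entity_spans:
        in_entity.update(range(start, end))
        if rng.random() < mask_rate:
            for i in range(start, end):
                masked[i] = mask_token
    # Independent per-token masking for everything else.
    for i in range(len(tokens)):
        if i not in in_entity and rng.random() < mask_rate:
            masked[i] = mask_token
    return masked

# Toy usage: "Tom Cruise" (0-2) and "Mary Lee Pfeiffer" (6-9) are entities,
# so each is either fully visible or fully masked in the noised sequence.
tokens = "Tom Cruise is the son of Mary Lee Pfeiffer".split()
noised = whole_entity_mask(tokens, entity_spans=[(0, 2), (6, 9)], seed=0)
```

Under this scheme the model never sees a half-masked entity such as `Tom [MASK]`, which is one way to read the paper's claim that predicting complete entities in one step mitigates entity fragmentation.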