🤖 AI Summary
Existing autoregressive and diffusion language models (DLMs) struggle to capture the complex causal structures inherent in natural language, resulting in inconsistent logical reasoning and vulnerability to causal inversion. To address this, the authors propose the Causal Concept-Guided Diffusion Language Model (C²DLM), the first DLM framework to explicitly incorporate concept-level causal graphs. C²DLM automatically constructs a causal graph using a teacher model and introduces a causal-guided attention mechanism that enforces causal plausibility during the denoising process. This explicit integration avoids the ambiguity of implicit causal modeling and improves reasoning consistency. Experiments show that C²DLM achieves a 12% accuracy gain with a 3.2× training speedup on the COT-OrderPerturb task, and yields an average 1.31% improvement across six downstream reasoning benchmarks, supporting the claim that explicit causal-structure modeling enhances language models' reasoning capabilities.
📝 Abstract
Autoregressive (AR) language models and Diffusion Language Models (DLMs) constitute the two principal paradigms of large language models. However, both paradigms suffer from insufficient reasoning capability. Human reasoning inherently relies on causal knowledge and thought, which are reflected in natural language. Yet in the AR paradigm, language is modeled as next-token prediction (a strictly left-to-right, token-by-token order), whereas natural language itself exhibits more flexible causal structures. In the DLM paradigm, the attention mechanism is fully connected, which entirely disregards causal order. To fill this gap, we propose a **C**ausal **C**oncept-Guided **D**iffusion **L**anguage **M**odel (C²DLM). Starting from the DLM's fully connected attention, C²DLM first obtains a concept-level causal graph from a teacher model, and then explicitly guides attention to learn causal relationships between concepts. By focusing on causal relationships and avoiding interference from difficult subgoals involving causal inversion, C²DLM improves accuracy by 12% with about a 3.2× training speedup on the COT-OrderPerturb task, and achieves an average gain of 1.31% across six downstream reasoning tasks. More details are available in the repository [here](https://github.com/Kairong-Han/C-2-DLM).
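The abstract does not specify how attention is guided by the concept-level causal graph. As a minimal hedged sketch (not the paper's actual implementation), one common way to realize this idea is an additive attention bias: tokens are mapped to concepts, and attention scores are boosted wherever the attending token's concept has a cause among the attended token's concepts. The names `token2concept`, `edges`, and `strength` below are illustrative assumptions, not identifiers from the paper or its repository.

```python
import numpy as np

def causal_concept_bias(token2concept, edges, strength=1.0):
    """Build an additive attention-bias matrix from a concept-level causal graph.

    token2concept : list mapping each token position to a concept id (assumed)
    edges         : set of (cause_concept, effect_concept) pairs (assumed)
    strength      : how strongly to favor causally linked positions (assumed)

    bias[i, j] is added to the attention score of query token i on key token j;
    it is boosted when token j's concept causes token i's concept, and when both
    tokens belong to the same concept (self-loops keep within-concept attention).
    """
    n = len(token2concept)
    causal = set(edges) | {(c, c) for c in set(token2concept)}
    bias = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if (token2concept[j], token2concept[i]) in causal:
                bias[i, j] = strength
    return bias

def biased_attention(Q, K, V, bias):
    """Standard scaled dot-product attention with the causal-concept bias added
    to the (otherwise fully connected) score matrix before the softmax."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1]) + bias
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V
```

Because the bias is additive rather than a hard mask, the denoiser can still attend anywhere, but causally plausible concept pairs receive extra weight; note the bias is asymmetric, so the cause-to-effect direction is favored while the inverted direction is not.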