C$^2$DLM: Causal Concept-Guided Diffusion Large Language Models

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing autoregressive and diffusion language models (DLMs) struggle to capture the complex causal structures inherent in natural language, resulting in inconsistent logical reasoning and vulnerability to causal inversion. To address this, we propose the Causal Concept-Guided Diffusion Language Model (C$^2$DLM), the first DLM framework to explicitly incorporate concept-level causal graphs. C$^2$DLM automatically constructs causal graphs using a teacher model and introduces a causal-guided attention mechanism that dynamically enforces causal plausibility during the denoising process. This explicit integration avoids the ambiguity of implicit causal modeling and substantially improves reasoning consistency. Experiments demonstrate that C$^2$DLM achieves a 12% accuracy gain on the COT-OrderPerturb task, accelerates training by 3.2×, and yields an average 1.31% improvement across six downstream reasoning benchmarks, validating that explicit causal structure modeling effectively enhances language models' reasoning capabilities.

📝 Abstract
Autoregressive (AR) language models and Diffusion Language Models (DLMs) constitute the two principal paradigms of large language models. However, both paradigms suffer from insufficient reasoning capabilities. Human reasoning inherently relies on causal knowledge and thought, which are reflected in natural language. But in the AR paradigm, language is modeled as next-token prediction (a strictly left-to-right, token-by-token order), whereas natural language itself exhibits more flexible causal structures. In the DLM paradigm, the attention mechanism is fully connected, which entirely disregards causal order. To fill this gap, we propose a Causal Concept-Guided Diffusion Language Model (C$^2$DLM). Starting from the DLM's fully connected attention, C$^2$DLM first obtains a concept-level causal graph from a teacher model, and then explicitly guides attention to learn causal relationships between concepts. By focusing on causal relationships and avoiding interference from difficult subgoals involving causal inversion, C$^2$DLM improves accuracy by 12% with roughly a 3.2× training speedup on the COT-OrderPerturb task, and achieves an average gain of 1.31% across six downstream reasoning tasks. More details are available in the repository at https://github.com/Kairong-Han/C-2-DLM.
Problem

Research questions and friction points this paper is trying to address.

AR and DLM models lack sufficient reasoning capabilities
Current models ignore flexible causal structures in language
Attention mechanisms disregard causal order in language processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal concept-guided attention mechanism for diffusion models
Concept-level causal graphs constructed automatically from a teacher model
Explicit learning of causal relationships between concepts
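The causal-guided attention idea above can be sketched as an additive bias over attention logits: given a concept-level causal graph, attention from a token whose concept is an *effect* toward tokens whose concept is its *cause* is encouraged. The function name, tensor layout, and bias form below are illustrative assumptions for a minimal sketch, not the paper's actual implementation.

```python
import torch

def causal_attention_bias(concept_ids, causal_adj, strength=1.0):
    """Hypothetical sketch: additive attention bias from a concept-level causal graph.

    concept_ids: (seq_len,) long tensor mapping each token to a concept id
                 (-1 for tokens outside any concept).
    causal_adj:  (num_concepts, num_concepts) 0/1 tensor where
                 causal_adj[i, j] = 1 means concept i causes concept j.
    Returns a (seq_len, seq_len) bias to add to attention logits, encouraging
    query (effect) tokens to attend to key (cause) tokens.
    """
    seq_len = concept_ids.shape[0]
    bias = torch.zeros(seq_len, seq_len)
    valid = concept_ids >= 0
    # Broadcast concept ids over (query, key) position pairs.
    cq = concept_ids.unsqueeze(1).expand(seq_len, seq_len)  # query concepts
    ck = concept_ids.unsqueeze(0).expand(seq_len, seq_len)  # key concepts
    pair_valid = valid.unsqueeze(1) & valid.unsqueeze(0)
    # An edge holds where the key's concept causes the query's concept.
    edges = causal_adj[ck.clamp(min=0), cq.clamp(min=0)].bool() & pair_valid
    bias[edges] = strength
    return bias
```

Such a bias would be added to the pre-softmax attention scores during denoising, softly steering the otherwise fully connected DLM attention toward causally plausible directions rather than hard-masking it.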
Kairong Han
College of Computer Science and Technology, Zhejiang University
Nuanqiao Shan
College of Computer Science and Technology, Zhejiang University
Ziyu Zhao
University of South Carolina
Computer vision, 2D/3D segmentation, generative 3D reconstruction
Zijing Hu
College of Computer Science and Technology, Zhejiang University
Xinpeng Dong
College of Computer Science and Technology, Zhejiang University
Junjian Ye
Noah’s Ark Lab, Huawei Technologies
Lujia Pan
Noah's Ark Lab, Huawei
Anomaly detection, time series, representation learning
Fei Wu
College of Computer Science and Technology, Zhejiang University
Kun Kuang
Zhejiang University
Causal inference, data mining, machine learning