CAT: Causal Attention Tuning For Injecting Fine-grained Causal Knowledge into Large Language Models

📅 2025-09-01
🤖 AI Summary
Large language models (LLMs) are prone to capturing spurious correlations and struggle to model true causal relationships, resulting in poor out-of-distribution (OOD) generalization. To address this, we propose Causal Attention Tuning (CAT), the first framework to explicitly inject fine-grained, token-level causal knowledge into the Transformer attention mechanism. CAT constructs a causal signal generation pipeline grounded in human priors and introduces a Re-Attention mechanism that dynamically recalibrates attention weights to suppress noise and bias. Crucially, CAT requires no architectural modifications or large-scale causal annotations—only lightweight attention tuning enables causal-structure-aware training. Evaluated on our novel spatiotemporal causal benchmark STG and multiple downstream tasks, CAT consistently improves OOD robustness and generation consistency. Empirical results demonstrate that explicit causal attention injection effectively enhances LLMs’ causal reasoning capabilities.
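The page describes the Re-Attention mechanism only at a high level ("dynamically recalibrates attention weights to suppress noise and bias"). As a minimal sketch of one way such recalibration could be supervised, assume a KL-style auxiliary loss that pulls each post-softmax attention row toward the tokens marked causal by the signal pipeline; the function name, loss form, and tensor shapes below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def re_attention_loss(attn, causal_mask, eps=1e-9):
    """Sketch of a Re-Attention-style auxiliary loss (assumed form).

    attn:        (seq, seq) post-softmax attention; each row sums to 1.
    causal_mask: (seq,) binary vector, 1 = token labeled causal.

    Returns the mean KL(target || attn_row) over query positions, where the
    target is uniform over causally-marked key tokens, so gradient descent
    shifts attention mass toward those tokens.
    """
    target = causal_mask.astype(float)
    target = target / (target.sum() + eps)  # uniform over causal tokens
    # KL divergence of each attention row from the causal target distribution.
    kl = (target * (np.log(target + eps) - np.log(attn + eps))).sum(axis=-1)
    return kl.mean()
```

For uniform attention over four tokens with two marked causal, the loss is log 2; it falls to zero once attention mass sits entirely on the causal tokens.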

📝 Abstract
Large Language Models (LLMs) have achieved remarkable success across various domains. However, a fundamental question remains: Can LLMs effectively utilize causal knowledge for prediction and generation? Through empirical studies, we find that LLMs trained directly on large-scale data often capture spurious correlations rather than true causal relationships, leading to suboptimal performance, especially in out-of-distribution (OOD) scenarios. To address this challenge, we propose Causal Attention Tuning (CAT), a novel approach that injects fine-grained causal knowledge into the attention mechanism. We propose an automated pipeline that leverages human priors to automatically generate token-level causal signals and introduce the Re-Attention mechanism to guide training, helping the model focus on causal structures while mitigating noise and biases in attention scores. Experimental results on our proposed Spurious Token Game (STG) benchmark and multiple downstream tasks demonstrate that our approach effectively leverages causal knowledge for prediction and remains robust in OOD scenarios. Implementation details can be found at https://github.com/Kairong-Han/CAT.
Problem

Research questions and friction points this paper is trying to address.

LLMs capture spurious correlations rather than true causal relationships
Models perform suboptimally in out-of-distribution (OOD) scenarios
Need to inject fine-grained causal knowledge into attention mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Attention Tuning (CAT) injects fine-grained causal knowledge into the attention mechanism
Automated pipeline turns human priors into token-level causal signals
Re-Attention mechanism guides training toward causal structure while mitigating attention noise
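The automated signal pipeline is likewise only summarized here. As a toy illustration of what a token-level causal signal could look like (the function and its inputs are hypothetical, not the paper's pipeline), a human-prior list of cause terms can be turned into a per-token binary mask:

```python
def causal_signal(tokens, cause_tokens):
    """Toy token-level causal signal: 1 where a token matches a known cause.

    tokens:       tokenized input sequence.
    cause_tokens: cause terms supplied as human priors (assumed input).
    """
    cause_set = set(cause_tokens)
    return [1 if t in cause_set else 0 for t in tokens]
```

A mask like this is what an attention-tuning loss would consume, marking which tokens the model should attend to when predicting the target.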
Kairong Han
College of Computer Science and Technology, Zhejiang University
Wenshuo Zhao
College of Computer Science and Technology, Zhejiang University
Ziyu Zhao
University of South Carolina
Computer vision · 2D/3D segmentation · Generative 3D reconstruction
JunJian Ye
Noah’s Ark Lab, Huawei Technologies
Lujia Pan
Noah's Ark Lab, Huawei
Anomaly detection · Time series · Representation learning
Kun Kuang
Zhejiang University
Causal Inference · Data Mining · Machine Learning