Transformer Is Inherently a Causal Learner

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This study addresses the challenge of accurately discovering time-lagged causal structures from multivariate time series without explicit causal constraints. The work proposes leveraging the autoregressive Transformer’s inherent sensitivity—specifically, the gradient of its predictions with respect to historical inputs—as a natural encoder of causal relationships, from which a causal graph is extracted via gradient attribution aggregation. It is rigorously demonstrated for the first time that standard Transformers possess intrinsic causal learning capabilities without requiring additional causal objectives or architectural modifications. Notably, causal discovery accuracy improves significantly as data heterogeneity increases. The method substantially outperforms existing causal discovery algorithms under challenging conditions, including nonlinear dynamics, long-range dependencies, and nonstationarity, with particularly pronounced gains in highly heterogeneous datasets.
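The aggregation step the summary describes can be sketched as follows; the symbols (score S, predictor f, lag τ, threshold λ) are our own notation for illustration, not the paper's:

```latex
% Hypothetical notation: f_i is the transformer's one-step prediction for
% variable i, x_j(t-\tau) a past input, T the number of time steps, L the
% context length, and \lambda an edge-selection threshold.
S_{ij}(\tau) = \frac{1}{T}\sum_{t}
  \left| \frac{\partial f_i\bigl(x(t-1),\dots,x(t-L)\bigr)}{\partial x_j(t-\tau)} \right|,
\qquad
j \xrightarrow{\;\tau\;} i \;\iff\; S_{ij}(\tau) > \lambda .
```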

📝 Abstract
We reveal that transformers trained in an autoregressive manner naturally encode time-delayed causal structures in their learned representations. When predicting future values in multivariate time series, the gradient sensitivities of transformer outputs with respect to past inputs directly recover the underlying causal graph, without any explicit causal objectives or structural constraints. We prove this connection theoretically under standard identifiability conditions and develop a practical extraction method using aggregated gradient attributions. On challenging cases such as nonlinear dynamics, long-term dependencies, and non-stationary systems, this approach greatly surpasses the performance of state-of-the-art discovery algorithms, especially as data heterogeneity increases, exhibiting scaling potential where causal accuracy improves with data volume and heterogeneity, a property traditional methods lack. This unifying view lays the groundwork for a future paradigm where causal discovery operates through the lens of foundation models, and foundation models gain interpretability and enhancement through the lens of causality.
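The core claim, that the gradients of an autoregressive predictor's outputs with respect to past inputs recover the time-lagged causal graph, can be illustrated in a minimal linear setting, where the gradient of the fitted one-step predictor is exactly its coefficient matrix. This is a toy sketch of the gradient-attribution idea, not the paper's transformer pipeline; the system, threshold, and variable names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lag-1 linear system with known causal links
# x1 -> x2 and x2 -> x3 (plus self-dependencies).
A_true = np.array([[0.9, 0.0, 0.0],
                   [0.5, 0.8, 0.0],
                   [0.0, 0.5, 0.7]])
T, d = 2000, 3
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(d)

# Fit a one-step autoregressive predictor by least squares.
past, future = X[:-1], X[1:]
B, *_ = np.linalg.lstsq(past, future, rcond=None)
A_hat = B.T  # row i, column j: sensitivity of x_i(t) to x_j(t-1)

# For a linear predictor the gradient of each output w.r.t. each past
# input is constant and equals A_hat; aggregating |gradients| and
# thresholding yields the estimated causal graph.
G = (np.abs(A_hat) > 0.2).astype(int)
print(G)  # recovers the support (nonzero pattern) of A_true
```

With a nonlinear predictor such as a transformer, the gradients vary with the input, so the per-sample attributions must be aggregated over time, as in the abstract's "aggregated gradient attributions"; the thresholding step is the same.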
Problem

Research questions and friction points this paper is trying to address.

causal discovery
transformer
time series
causal graph
autoregressive
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal discovery
transformer
gradient attribution
time series
foundation models