CGRL: Causal-Guided Representation Learning for Graph Out-of-Distribution Generalization

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of graph neural networks to spurious correlations in out-of-distribution (OOD) scenarios, which destabilizes the mutual information between predictions and labels. To mitigate this issue, the authors formulate a causal framework for node classification by constructing a causal graph and applying backdoor adjustment to block non-causal pathways. They further propose a causally guided graph representation learning framework that integrates causal representation learning with a same-order asymptotic loss replacement strategy to enhance causal invariance. Theoretical analysis derives a lower bound on the OOD generalization error, and extensive experiments demonstrate that the proposed method significantly outperforms existing baselines across multiple OOD graph datasets, effectively improving model generalization.

📝 Abstract
Graph Neural Networks (GNNs) have achieved impressive performance in graph-related tasks. However, they generalize poorly on out-of-distribution (OOD) data, as they tend to learn spurious correlations. Such correlations manifest as GNNs failing to stably learn the mutual information between prediction representations and ground-truth labels under OOD settings. To address these challenges, we formulate a causal graph starting from the essence of node classification, adopt backdoor adjustment to block non-causal paths, and theoretically derive a lower bound for improving the OOD generalization of GNNs. To materialize these insights, we further propose a novel approach integrating causal representation learning and a loss replacement strategy. The former captures node-level causal invariance and reconstructs the graph posterior distribution. The latter introduces asymptotic losses of the same order to replace the original losses. Extensive experiments demonstrate the superiority of our method in OOD generalization and its effectiveness in alleviating unstable mutual information learning.
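For context, the backdoor adjustment the abstract refers to follows the standard causal-inference form. As a sketch in generic notation (not the paper's own symbols): with treatment \(X\), outcome \(Y\), and an observed confounder \(Z\) that opens a backdoor path \(X \leftarrow Z \rightarrow Y\), the interventional distribution is recovered by stratifying over \(Z\):

\[
P(Y \mid \mathrm{do}(X)) = \sum_{z} P(Y \mid X, Z = z)\, P(Z = z)
\]

Intuitively, conditioning on \(Z\) inside the sum while weighting by its marginal \(P(Z=z)\) (rather than \(P(Z=z \mid X)\)) blocks the non-causal path, which is the mechanism the paper applies to sever spurious correlations in node classification.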
Problem

Research questions and friction points this paper is trying to address.

out-of-distribution generalization
spurious correlations
graph neural networks
mutual information instability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Representation Learning
Out-of-Distribution Generalization
Graph Neural Networks
Backdoor Adjustment
Mutual Information Stability
Bowen Lu
School of Artificial Intelligence, Anhui University, Hefei, China
Liangqiang Yang
School of Artificial Intelligence, Anhui University, Hefei, China
Teng Li
Anhui University
computer vision · multimedia · pattern recognition