🤖 AI Summary
This work addresses the vulnerability of graph neural networks to spurious correlations in out-of-distribution (OOD) scenarios, which destabilizes the mutual information between predictions and labels. To mitigate this issue, the authors formulate a causal framework for node classification by constructing a causal graph and applying backdoor adjustment to block non-causal paths. They further propose a causally guided graph representation learning framework that integrates causal representation learning with a same-order asymptotic loss replacement strategy to enhance causal invariance. Theoretical analysis derives a lower bound for improving OOD generalization, and extensive experiments demonstrate that the proposed method significantly outperforms existing baselines across multiple OOD graph datasets, effectively improving model generalization.
📝 Abstract
Graph Neural Networks (GNNs) have achieved impressive performance in graph-related tasks. However, they generalize poorly to out-of-distribution (OOD) data because they tend to learn spurious correlations. Such correlations manifest as a failure of GNNs to stably learn the mutual information between prediction representations and ground-truth labels under OOD settings. To address these challenges, we formulate a causal graph starting from the essence of node classification, adopt backdoor adjustment to block non-causal paths, and theoretically derive a lower bound for improving the OOD generalization of GNNs. To materialize these insights, we further propose a novel approach integrating causal representation learning and a loss replacement strategy. The former captures node-level causal invariance and reconstructs the graph posterior distribution. The latter introduces asymptotic losses of the same order to replace the original losses. Extensive experiments demonstrate that our method achieves superior OOD generalization and effectively alleviates the phenomenon of unstable mutual information learning.
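For readers unfamiliar with backdoor adjustment, the following is the standard causal-inference identity it refers to; this is a generic sketch with generic symbols (a confounder \(C\), input \(X\), label \(Y\)), not necessarily the paper's own notation:

```latex
% Standard backdoor adjustment: intervening on X (the do-operator)
% blocks the non-causal path through the confounder C by
% stratifying on C and marginalizing it out.
P\big(Y \mid \mathrm{do}(X)\big) \;=\; \sum_{c} P\big(Y \mid X,\, C = c\big)\, P\big(C = c\big)
```

Intuitively, the spurious correlation enters through \(C\); averaging the conditional \(P(Y \mid X, C=c)\) over the marginal \(P(C=c)\), rather than over \(P(C=c \mid X)\), removes the confounder's influence on the prediction.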