🤖 AI Summary
This work addresses the poor out-of-distribution (OOD) generalization of graph neural networks (GNNs) in graph regression tasks. We present the first systematic extension of causal graph learning (CGL) from classification to regression settings. Our method introduces a contrastive-learning-inspired causal intervention framework that explicitly models confounding effects in regression prediction: it identifies causal subgraphs, disentangles confounding factors, and performs graph-level causal interventions. Integrating causal discovery, contrastive representation learning, and GNN-based modeling, the approach achieves significant OOD generalization improvements across multiple graph regression benchmarks—reducing mean absolute error (MAE) by 12.7%–23.4% on average. The code is publicly available.
📝 Abstract
Through recognizing causal subgraphs, causal graph learning (CGL) has risen to be a promising approach for improving the generalizability of graph neural networks under out-of-distribution (OOD) scenarios. However, the empirical successes of CGL techniques are mostly exemplified in classification settings, while regression tasks, a more challenging setting in graph learning, are overlooked. We thus devote this work to tackling causal graph regression (CGR); to this end we reshape the processing of confounding effects in existing CGL studies, which mainly deal with classification. Specifically, we reflect on the predictive power of confounders in graph-level regression, and generalize classification-specific causal intervention techniques to regression through a lens of contrastive learning. Extensive experiments on graph OOD benchmarks validate the efficacy of our proposals for CGR. The model implementation and the code are provided on https://github.com/causal-graph/CGR.