Causal Neighbourhood Learning for Invariant Graph Representations

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of graph neural networks (GNNs) under distribution shifts, which stems from noise and spurious correlations in graphs obscuring true causal relationships. To tackle this, the authors propose CNL-GNN, a novel framework that integrates structural-level causal intervention with feature disentanglement. By generating counterfactual neighbourhoods, introducing learnable importance masks, and leveraging attention mechanisms, CNL-GNN adaptively perturbs edge connections to explicitly model causal neighbourhoods and separate causal features from confounding factors. Extensive experiments on four public cross-domain datasets demonstrate that the proposed method significantly outperforms existing approaches, validating its effectiveness in enhancing representation invariance, robustness, and generalization under distribution shifts.

📝 Abstract
Graph data often contain noisy and spurious correlations that mask the true causal relationships, which are essential for enabling graph models to make predictions based on the underlying causal structure of the data. Dependence on spurious connections makes it challenging for traditional Graph Neural Networks (GNNs) to generalize effectively across different graphs. Furthermore, traditional aggregation methods tend to amplify these spurious patterns, limiting model robustness under distribution shifts. To address these issues, we propose Causal Neighbourhood Learning with Graph Neural Networks (CNL-GNN), a novel framework that performs causal interventions on graph structure. CNL-GNN effectively identifies and preserves causally relevant connections and reduces spurious influences through the generation of counterfactual neighbourhoods and adaptive edge perturbation guided by learnable importance masking and an attention-based mechanism. In addition, by combining structural-level interventions with the disentanglement of causal features from confounding factors, the model learns invariant node representations that are robust and generalize well across different graph structures. Our approach improves causal graph learning beyond traditional feature-based methods, resulting in a robust classification model. Extensive experiments on four publicly available datasets, including multiple domain variants of one dataset, demonstrate that CNL-GNN outperforms state-of-the-art GNN models.
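The abstract describes learnable importance masking over edges and the generation of counterfactual neighbourhoods via adaptive edge perturbation. A minimal pure-Python sketch of that general idea follows; the function names, the sigmoid masking, and the thresholding rule are illustrative assumptions, not the authors' actual CNL-GNN implementation:

```python
import math
import random

def sigmoid(x):
    """Squash a learnable edge score into a (0, 1) importance mask."""
    return 1.0 / (1.0 + math.exp(-x))

def counterfactual_neighbourhood(edges, scores, threshold=0.5, rng=None):
    """Sketch of counterfactual neighbourhood generation.

    Edges whose learned importance mask exceeds the threshold are treated
    as causally relevant and always kept; low-importance edges survive
    only with probability equal to their mask, which perturbs the
    neighbourhood and suppresses likely-spurious connections.
    """
    rng = rng or random.Random(0)
    masks = {e: sigmoid(scores[e]) for e in edges}
    kept = [e for e in edges if masks[e] >= threshold]
    perturbed = [e for e in edges
                 if masks[e] < threshold and rng.random() < masks[e]]
    return kept + perturbed, masks

# Toy usage: one confidently causal edge, one likely-spurious edge.
edges = [("a", "b"), ("a", "c")]
scores = {("a", "b"): 5.0, ("a", "c"): -5.0}
neighbourhood, masks = counterfactual_neighbourhood(edges, scores)
```

In the paper the importance scores would be learned end-to-end (guided by an attention mechanism) rather than fixed as above; this sketch only illustrates how a mask can deterministically preserve high-importance edges while stochastically dropping the rest.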
Problem

Research questions and friction points this paper is trying to address.

causal relationships
spurious correlations
graph neural networks
distribution shifts
invariant representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Neighbourhood Learning
Counterfactual Neighbourhoods
Invariant Graph Representations
Edge Perturbation
Disentangled Causal Features