🤖 AI Summary
Existing graph invariant-learning methods (e.g., IRM, VREx) lack class-conditional invariance constraints, making them prone to relying on spurious features and prone to failure at node-level out-of-distribution (OOD) generalization. The authors first establish a graph structural causal model to theoretically characterize this failure mechanism, then propose CIA-LRA, a node-level OOD generalization framework that requires no environment labels and instead uses neighborhood-label-guided local reweighting and alignment to extract invariant representations. A PAC-Bayesian generalization error bound provides theoretical guarantees for the method. Extensive experiments on multiple graph OOD benchmarks demonstrate significant improvements over state-of-the-art approaches; robustness is further validated on real-world heterogeneous graphs, and the code is publicly released.
📝 Abstract
Enhancing node-level Out-Of-Distribution (OOD) generalization on graphs remains a crucial area of research. In this paper, we develop a Structural Causal Model (SCM) to theoretically dissect the performance of two prominent invariant learning methods -- Invariant Risk Minimization (IRM) and Variance-Risk Extrapolation (VREx) -- in node-level OOD settings. Our analysis reveals a critical limitation: due to the lack of class-conditional invariance constraints, these methods may struggle to accurately identify the structure of the predictive invariant ego-graph and consequently rely on spurious features. To address this, we propose Cross-environment Intra-class Alignment (CIA), which explicitly eliminates spurious features by aligning cross-environment representations conditioned on the same class, bypassing the need for explicit knowledge of the causal pattern structure. To adapt CIA to node-level OOD scenarios where environment labels are hard to obtain, we further propose CIA-LRA (Localized Reweighting Alignment), which leverages the distribution of neighboring labels to selectively align node representations, effectively distinguishing and preserving invariant features while removing spurious ones, all without relying on environment labels. We theoretically prove CIA-LRA's effectiveness by deriving an OOD generalization error bound based on PAC-Bayesian analysis. Experiments on graph OOD benchmarks validate the superiority of CIA and CIA-LRA, marking a significant advancement in node-level OOD generalization. The code is available at https://github.com/NOVAglow646/NeurIPS24-Invariant-Learning-on-Graphs.
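The core idea behind CIA, "aligning cross-environment representations conditioned on the same class," can be illustrated with a minimal sketch. The function below penalizes distances between per-environment class-mean representations; the specific distance (squared Euclidean between class means) and the unweighted pairwise sum are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cia_alignment_penalty(reps, labels, envs):
    """Sketch of a class-conditional cross-environment alignment penalty.

    For each class, compute the mean representation within each environment
    that contains that class, then sum squared distances between those
    per-environment class means. Spurious (environment-dependent) features
    inflate this penalty; invariant features do not.
    """
    penalty = 0.0
    for c in np.unique(labels):
        # mean representation of class c in each environment where it appears
        means = []
        for e in np.unique(envs):
            mask = (labels == c) & (envs == e)
            if mask.any():
                means.append(reps[mask].mean(axis=0))
        # pairwise squared distances between the class-c environment means
        for i in range(len(means)):
            for j in range(i + 1, len(means)):
                penalty += float(np.sum((means[i] - means[j]) ** 2))
    return penalty
```

In practice such a penalty would be added to the classification loss during training; here, if same-class representations are identical across environments, the penalty is exactly zero, which is the invariance condition CIA targets.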