AI Summary
Existing graph classification methods exhibit limited out-of-distribution (OOD) generalization, primarily focusing on semantic invariance while neglecting the causal stability inherent in graph structure.
Method: We propose a Unified Invariant Learning (UIL) framework, the first to jointly model structural invariance (enforced via graphon-distance constraints on subgraph feature stability) and semantic invariance (achieved through environment partitioning and cross-environment contrastive learning for robust representations). UIL incorporates a theory-driven stable-feature discrimination criterion and a cross-environment consistency regularizer that provably converge to causally stable graph features.
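To make the structural-invariance idea concrete, here is a minimal sketch of a graphon-distance penalty across environments. This is not the authors' implementation: the function names, the step-function (stochastic-block-style) graphon estimator, and the Frobenius-norm comparison are illustrative assumptions standing in for the paper's actual graphon distance over stable subgraphs.

```python
import numpy as np

def graphon_estimate(adj, k=4):
    """Approximate a graph's graphon by a k x k step function:
    sort nodes by degree, then average adjacency over k x k node blocks."""
    order = np.argsort(-adj.sum(axis=1))           # degree-sorted node order
    A = adj[np.ix_(order, order)]
    blocks = np.array_split(np.arange(A.shape[0]), k)
    W = np.empty((k, k))
    for i, rows in enumerate(blocks):
        for j, cols in enumerate(blocks):
            W[i, j] = A[np.ix_(rows, cols)].mean()  # block edge density
    return W

def structural_invariance_loss(stable_adjs_by_env, k=4):
    """Mean pairwise Frobenius distance between per-environment graphon
    estimates of the stable subgraphs; zero when environments match."""
    Ws = [np.mean([graphon_estimate(a, k) for a in adjs], axis=0)
          for adjs in stable_adjs_by_env]
    loss, pairs = 0.0, 0
    for i in range(len(Ws)):
        for j in range(i + 1, len(Ws)):
            loss += np.linalg.norm(Ws[i] - Ws[j])
            pairs += 1
    return loss / max(pairs, 1)
```

Minimizing this penalty pushes the extracted stable subgraphs toward a shared generating graphon across environments, which is one way to operationalize the structural invariance principle described above.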
Contribution/Results: UIL achieves significant improvements over state-of-the-art methods across multiple OOD graph classification benchmarks. We provide theoretical analysis proving its convergence advantage under causal stability assumptions. The implementation is publicly available.
Abstract
Invariant learning demonstrates substantial potential for enhancing the generalization of graph neural networks (GNNs) on out-of-distribution (OOD) data. It aims to recognize stable features in graph data for classification, based on the premise that these features causally determine the target label, and their influence is invariant to changes in distribution. Along this line, most studies have attempted to pinpoint these stable features by emphasizing explicit substructures in the graph, such as masked or attentive subgraphs, and primarily enforcing the invariance principle in the semantic space, i.e., graph representations. However, we argue that focusing only on the semantic space may not accurately identify these stable features. To address this, we introduce the Unified Invariant Learning (UIL) framework for graph classification. It provides a unified perspective on invariant graph learning, emphasizing both structural and semantic invariance principles to identify more robust stable features. In the graph space, UIL adheres to the structural invariance principle by reducing the distance between graphons over a set of stable features across different environments. Simultaneously, to confirm semantic invariance, UIL underscores that the acquired graph representations should demonstrate exemplary performance across diverse environments. We present both theoretical and empirical evidence to confirm our method's ability to recognize superior stable features. Moreover, through a series of comprehensive experiments complemented by in-depth analyses, we demonstrate that UIL considerably enhances OOD generalization, surpassing the performance of leading baseline methods. Our code is available at https://github.com/yongduosui/UIL.
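The semantic-invariance requirement, that learned representations perform uniformly well across environments, can be sketched with a risk-variance penalty in the spirit of risk-extrapolation methods. This is a deliberate simplification, not the paper's exact cross-environment objective; the function name and the `beta` weight are illustrative assumptions.

```python
import numpy as np

def cross_env_consistency(env_risks, beta=1.0):
    """Average risk plus a variance penalty over per-environment risks.
    Low variance means the representation performs comparably in every
    environment, a proxy for the semantic invariance principle."""
    risks = np.asarray(env_risks, dtype=float)
    return risks.mean() + beta * risks.var()
```

For example, two environments with equal risk incur no penalty beyond the mean, while an uneven split of the same total risk is penalized, steering training away from representations that exploit environment-specific shortcuts.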