A Unified Invariant Learning Framework for Graph Classification

📅 2025-01-22
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing graph classification methods exhibit limited out-of-distribution (OOD) generalization: they primarily enforce invariance in the semantic space while neglecting the structural stability of causal subgraphs. Method: We propose the Unified Invariant Learning (UIL) framework, the first to jointly model structural invariance (enforced via graphon-distance constraints on stable subgraph features across environments) and semantic invariance (achieved through environment partitioning and cross-environment contrastive learning for robust representations). UIL incorporates a theory-driven stable-feature discrimination criterion and cross-environment consistency regularization that provably converge to causally stable graph features. Contribution/Results: UIL achieves significant improvements over state-of-the-art methods on multiple OOD graph classification benchmarks, with theoretical analysis establishing its convergence advantage under causal stability assumptions. The implementation is publicly available.

๐Ÿ“ Abstract
Invariant learning demonstrates substantial potential for enhancing the generalization of graph neural networks (GNNs) with out-of-distribution (OOD) data. It aims to recognize stable features in graph data for classification, based on the premise that these features causally determine the target label, and their influence is invariant to changes in distribution. Along this line, most studies have attempted to pinpoint these stable features by emphasizing explicit substructures in the graph, such as masked or attentive subgraphs, and primarily enforcing the invariance principle in the semantic space, i.e., graph representations. However, we argue that focusing only on the semantic space may not accurately identify these stable features. To address this, we introduce the Unified Invariant Learning (UIL) framework for graph classification. It provides a unified perspective on invariant graph learning, emphasizing both structural and semantic invariance principles to identify more robust stable features. In the graph space, UIL adheres to the structural invariance principle by reducing the distance between graphons over a set of stable features across different environments. Simultaneously, to confirm semantic invariance, UIL underscores that the acquired graph representations should demonstrate exemplary performance across diverse environments. We present both theoretical and empirical evidence to confirm our method's ability to recognize superior stable features. Moreover, through a series of comprehensive experiments complemented by in-depth analyses, we demonstrate that UIL considerably enhances OOD generalization, surpassing the performance of leading baseline methods. Our codes are available at https://github.com/yongduosui/UIL.
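To make the structural invariance principle concrete, the abstract's idea of "reducing the distance between graphons over a set of stable features across different environments" can be sketched as a penalty between step-function graphons estimated per environment. The `step_graphon` and `graphon_distance` helpers below are illustrative assumptions for exposition only, not the authors' implementation (see the linked repository for that):

```python
import numpy as np

def step_graphon(adjs, k=16):
    """Estimate a k x k step-function graphon from a list of adjacency
    matrices: sort nodes by degree, then block-average edge densities."""
    W = np.zeros((k, k))
    for A in adjs:
        n = A.shape[0]
        order = np.argsort(-A.sum(axis=1))      # degree-descending node order
        A = A[np.ix_(order, order)]
        blocks = np.minimum((np.arange(n) * k) // n, k - 1)  # node -> block id
        sums = np.zeros((k, k))
        counts = np.zeros((k, k))
        for i in range(n):
            for j in range(n):
                sums[blocks[i], blocks[j]] += A[i, j]
                counts[blocks[i], blocks[j]] += 1
        W += sums / np.maximum(counts, 1)       # per-block edge density
    return W / len(adjs)

def graphon_distance(env_a, env_b, k=16):
    """Structural-invariance penalty: L2 distance between step graphons
    estimated from the stable subgraphs of two environments."""
    Wa, Wb = step_graphon(env_a, k), step_graphon(env_b, k)
    return np.sqrt(np.mean((Wa - Wb) ** 2))
```

In a training loop, such a penalty would be added to the classification loss so that the selected stable subgraphs induce similar graphons in every environment; the paper's actual estimator and loss may differ.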
Problem

Research questions and friction points this paper is trying to address.

Invariant Learning
Graph Neural Networks
Out-of-Distribution (OOD) Data Classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Invariant Learning
Graph Neural Networks
Feature Stability and Accuracy
Yongduo Sui
Tencent
LLM Agent, Graph Learning, Recommendation
Jie Sun
University of Science and Technology of China, Hefei, China
Shuyao Wang
University of Tennessee
Power Electronics, Power Systems, Microgrid, Renewable Energy
Zemin Liu
Zhejiang University
Graph Learning, Graph Imbalanced Learning
Qing Cui
Ant Group, Beijing, China
Longfei Li
Ant Group, Hangzhou, China
Xiang Wang
University of Science and Technology of China, Hefei, China