Invariant Graph Learning Meets Information Bottleneck for Out-of-Distribution Generalization

📅 2024-08-03
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Graph neural networks (GNNs) suffer from poor out-of-distribution (OOD) generalization on graph data. Existing graph augmentation and causal intervention methods either compromise structural invariance during graph manipulation or yield unreliable invariances due to the absence of causal supervision. To address this, we propose InfoIGL, an invariant graph learning framework grounded in information bottleneck theory. InfoIGL employs an information bottleneck constraint coupled with a redundancy filter to compress and suppress environment-specific, task-irrelevant information. It further incorporates multi-level contrastive learning, operating at the node, subgraph, and graph levels, to preserve class-consistent, distribution-invariant features across domains without supervision. Crucially, InfoIGL requires no causal labels and avoids explicit graph modifications that may distort topological structure. Evaluated on multiple synthetic and real-world OOD graph classification benchmarks, InfoIGL achieves state-of-the-art performance, demonstrating substantial improvements in robust generalization to unseen distributions.

📝 Abstract
Graph out-of-distribution (OOD) generalization remains a major challenge in graph learning since graph neural networks (GNNs) often suffer from severe performance degradation under distribution shifts. Invariant learning, aiming to extract invariant features across varied distributions, has recently emerged as a promising approach for OOD generalization. Despite the great success of invariant learning in OOD problems for Euclidean data (e.g., images), the exploration within graph data remains constrained by the complex nature of graphs. Existing studies, such as data augmentation or causal intervention, either suffer from disruptions to invariance during the graph manipulation process or face reliability issues due to a lack of supervised signals for causal parts. In this work, we propose a novel framework, called Invariant Graph Learning based on Information bottleneck theory (InfoIGL), to extract the invariant features of graphs and enhance models' generalization ability to unseen distributions. Specifically, InfoIGL introduces a redundancy filter to compress task-irrelevant information related to environmental factors. Cooperating with our designed multi-level contrastive learning, we maximize the mutual information among graphs of the same class in the downstream classification tasks, preserving invariant features for prediction to a great extent. An appealing feature of InfoIGL is its strong generalization ability without depending on supervised signals of invariance. Experiments on both synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance under OOD generalization for graph classification tasks. The source code is available at https://github.com/maowenyu-11/InfoIGL.
Problem

Research questions and friction points this paper is trying to address.

Enhance graph neural networks' OOD generalization
Extract invariant features across graph distributions
Improve model performance without supervised invariance signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Invariant Graph Learning framework
Redundancy filter for information compression
Multi-level contrastive learning integration
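The "redundancy filter" above compresses task-irrelevant information before prediction. One generic way to realize such a bottleneck is a learned soft gate over node features; the module below is a hypothetical sketch of that pattern, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class RedundancyFilter(nn.Module):
    """Illustrative gating module: scores each node's features and
    soft-masks them, attenuating information not needed for the task
    (the information-bottleneck idea). Architecture is an assumption."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, dim), nn.Tanh(),
            nn.Linear(dim, 1),
        )

    def forward(self, node_feats):
        gate = torch.sigmoid(self.score(node_feats))  # (N, 1), values in (0, 1)
        return node_feats * gate                      # attenuated node features
```

Because the gate lies in (0, 1), the filtered features can only shrink each node's signal, which is what makes the module act as a compressor rather than a general transformation.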
Authors
Wenyu Mao
School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230026, China
Jiancan Wu
University of Science and Technology of China
Haoyang Liu
School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
Yongduo Sui
Tencent
Xiang Wang
School of Artificial Intelligence and Data Science, University of Science and Technology of China, Hefei 230026, China