Adversarial Label Invariant Graph Data Augmentations for Out-of-Distribution Generalization

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of out-of-distribution (OoD) generalization on graph data under covariate shift—where training and test input distributions differ while label semantics remain unchanged—by proposing the RIA method. RIA introduces, for the first time, a Q-learning-inspired adversarial exploration mechanism into graph data augmentation, generating label-preserving adversarial perturbations to simulate unknown environments. It incorporates a constrained optimization framework solved via an alternating gradient descent-ascent algorithm to prevent overfitting to the training distribution. RIA is designed to seamlessly integrate with various existing OoD approaches and demonstrates significant performance gains over current baselines across multiple synthetic and real-world graph datasets, achieving superior OoD classification accuracy.
📝 Abstract
Out-of-distribution (OoD) generalization is required when representation learning encounters a distribution shift, which happens frequently in practice when training and testing data come from different environments. Covariate shift is a type of distribution shift that occurs only in the input data, while the concept distribution stays invariant. We propose RIA (Regularization for Invariance with Adversarial training), a new method for OoD generalization under covariate shift. Motivated by an analogy to $Q$-learning, it performs an adversarial exploration of training data environments. These new environments are induced by adversarial label-invariant data augmentations that prevent a collapse to an in-distribution trained learner. RIA works with many existing OoD generalization methods for covariate shift that can be formulated as constrained optimization problems. We develop an alternating gradient descent-ascent algorithm to solve the problem, and perform extensive experiments on OoD graph classification under various kinds of synthetic and natural distribution shifts. We demonstrate that our method achieves high accuracy compared with OoD baselines.
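The optimization pattern the abstract describes can be illustrated with a minimal sketch: a learner minimizes the loss (descent) while an adversary maximizes it by perturbing the inputs within a budget (ascent), the budget standing in for the label-invariance constraint. Everything below (the toy regression task, the `eps` budget, the learning rates) is an illustrative assumption, not the paper's actual setup.

```python
import numpy as np

# Toy data: linear regression targets with small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=64)

w = np.zeros(4)            # learner parameters (descent player)
delta = np.zeros_like(X)   # adversarial input perturbation (ascent player)
eps = 0.3                  # perturbation budget, a stand-in for the
                           # label-invariance constraint
lr_w, lr_d = 0.05, 0.1

def loss(w, delta):
    r = (X + delta) @ w - y
    return 0.5 * np.mean(r ** 2)

for step in range(500):
    # Ascent step: the adversary perturbs inputs to increase the loss,
    # then projects back onto the budget (a simple box constraint here).
    r = (X + delta) @ w - y
    grad_delta = np.outer(r, w) / len(y)
    delta = np.clip(delta + lr_d * grad_delta, -eps, eps)

    # Descent step: the learner minimizes the worst-case loss.
    grad_w = (X + delta).T @ ((X + delta) @ w - y) / len(y)
    w = w - lr_w * grad_w

print(loss(w, delta))
```

The alternation (one ascent step, one descent step) is the generic gradient descent-ascent template; the paper's method operates on graph augmentations rather than this box-constrained regression toy.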
Problem

Research questions and friction points this paper is trying to address.

Out-of-Distribution Generalization
Covariate Shift
Graph Data Augmentation
Distribution Shift
Representation Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial Augmentation
Label Invariance
Out-of-Distribution Generalization
Covariate Shift
Graph Representation Learning