Rethinking Graph Out-Of-Distribution Generalization: A Learnable Random Walk Perspective

📅 2025-05-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph neural networks (GNNs) suffer significant performance degradation under distributional shift, as existing out-of-distribution (OOD) generalization methods rely on fixed graph topologies or spectral structures to enforce invariance—assumptions misaligned with real-world distributional changes. This paper proposes a learnable random walk (LRW) perspective, modeling cross-distribution invariant knowledge as data-driven walk dynamics, thereby abandoning rigid structural assumptions. Our approach features two core innovations: (1) a jointly parameterized LRW-sampler and path encoder that learns node transition matrices end-to-end; and (2) a mutual information maximization loss based on kernel density estimation (KDE), explicitly enforcing discriminability and robustness of walk paths under OOD conditions. Evaluated across multiple graph OOD benchmarks, our method achieves an average accuracy improvement of 3.87% over state-of-the-art approaches.

📝 Abstract
Out-Of-Distribution (OOD) generalization has gained increasing attention for machine learning on graphs, as graph neural networks (GNNs) often exhibit performance degradation under distribution shifts. Existing graph OOD methods tend to follow the basic ideas of invariant risk minimization and structural causal models, interpreting the invariant knowledge across datasets under various distribution shifts as graph topology or graph spectrum. However, these interpretations may be inconsistent with real-world scenarios, as neither invariant topology nor invariant spectrum is assured. In this paper, we advocate the learnable random walk (LRW) perspective as the instantiation of invariant knowledge, and propose LRW-OOD to realize graph OOD generalization learning. Instead of employing a fixed probability transition matrix (i.e., the degree-normalized adjacency matrix), we parameterize the transition matrix with an LRW-sampler and a path encoder. Furthermore, we propose a kernel density estimation (KDE)-based mutual information (MI) loss to generate random walk sequences that adhere to OOD principles. Extensive experiments demonstrate that our model can effectively enhance graph OOD generalization under various types of distribution shifts, yielding a significant accuracy improvement of 3.87% over state-of-the-art graph OOD generalization baselines.
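The core idea of replacing the fixed degree-normalized transition matrix with a learned one can be sketched as an edge-masked softmax over learned pairwise scores. The toy graph, the embedding matrix `H`, and the score function below are illustrative assumptions, not the paper's actual architecture (in LRW-OOD the sampler's parameters are trained end-to-end with the path encoder):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: adjacency matrix of a 4-node cycle (illustrative only).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Hypothetical learned node embeddings standing in for the LRW-sampler's
# parameters; here they are just random stand-ins.
H = rng.normal(size=(4, 8))

def learnable_transition(A, H):
    """Edge-masked softmax over pairwise scores -> row-stochastic matrix."""
    scores = H @ H.T                            # learned compatibility scores
    scores = np.where(A > 0, scores, -np.inf)   # restrict to existing edges
    scores -= scores.max(axis=1, keepdims=True) # numerical stability
    P = np.exp(scores)
    return P / P.sum(axis=1, keepdims=True)

def sample_walk(P, start, length, rng):
    """Sample a random walk of `length` steps from the transition matrix."""
    walk = [start]
    for _ in range(length):
        walk.append(int(rng.choice(P.shape[1], p=P[walk[-1]])))
    return walk

P = learnable_transition(A, H)
walk = sample_walk(P, start=0, length=5, rng=rng)
```

Because the softmax is masked by the adjacency matrix, every sampled walk stays on real edges while the walk *probabilities* are free parameters; the degree-normalized walk is recovered as the special case of uniform scores over neighbors.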
Problem

Research questions and friction points this paper is trying to address.

Addressing performance degradation of GNNs under graph distribution shifts
Proposing learnable random walks for invariant knowledge in OOD generalization
Enhancing OOD generalization via KDE-based mutual information loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable random walk for graph OOD generalization
Parameterized transition matrix with LRW-sampler
KDE-based mutual information loss for OOD principles
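A KDE-based MI objective of the kind listed above can be illustrated with a plug-in estimator: estimate the joint and marginal densities with Gaussian kernels and average the log density ratio. This is a generic 1-D sketch under assumed bandwidths and synthetic data, not the paper's actual loss over walk-path representations:

```python
import numpy as np

def kde_mi(x, y, h=0.3):
    """Plug-in MI estimate: mean of log p(x,y) / (p(x) p(y)),
    with Gaussian kernel density estimates using product kernels."""
    dx = (x[:, None] - x[None, :]) / h
    dy = (y[:, None] - y[None, :]) / h
    kx = np.exp(-0.5 * dx**2) / (h * np.sqrt(2 * np.pi))  # kernel matrix for x
    ky = np.exp(-0.5 * dy**2) / (h * np.sqrt(2 * np.pi))  # kernel matrix for y
    p_xy = (kx * ky).mean(axis=1)  # joint density at each sample pair
    p_x = kx.mean(axis=1)          # marginal density of x
    p_y = ky.mean(axis=1)          # marginal density of y
    return float(np.mean(np.log(p_xy / (p_x * p_y))))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
mi_dep = kde_mi(x, x + 0.1 * rng.normal(size=500))  # strongly dependent pair
mi_ind = kde_mi(x, rng.normal(size=500))            # independent pair
```

A dependent pair yields a clearly larger estimate than an independent one, which is the property a loss needs in order to push walk-path representations toward high MI with the label (discriminability) and low MI with spurious environment factors (robustness).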