Fully Inductive Node Representation Learning via Graph View Transformation

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenging problem of cross-dataset, fully inductive reasoning on graphs, i.e., generalizing to entirely unseen graphs whose feature spaces differ widely in both dimensionality and semantics, under a strict no-retraining constraint. To this end, the authors propose the view-space representation paradigm and introduce Graph View Transformation (GVT) together with its recurrent variant, Recurrent GVT, a node- and feature-permutation-equivariant architecture for fully inductive modeling. By combining view-space representation learning, permutation-equivariant networks, and a retraining-free transfer mechanism, the method improves on GraphAny, the prior fully inductive state of the art, by 8.93% and surpasses 12 individually tuned GNNs by at least 3.30% across 27 node-classification benchmarks, overcoming the representational bottleneck that feature heterogeneity imposes on graph generalization.

📝 Abstract
Generalizing a pretrained model to unseen datasets without retraining is an essential step toward a foundation model. However, achieving such cross-dataset, fully inductive inference is difficult in graph-structured data where feature spaces vary widely in both dimensionality and semantics. Any transformation in the feature space can easily violate the inductive applicability to unseen datasets, strictly limiting the design space of a graph model. In this work, we introduce the view space, a novel representational axis in which arbitrary graphs can be naturally encoded in a unified manner. We then propose Graph View Transformation (GVT), a node- and feature-permutation-equivariant mapping in the view space. GVT serves as the building block for Recurrent GVT, a fully inductive model for node representation learning. Pretrained on OGBN-Arxiv and evaluated on 27 node-classification benchmarks, Recurrent GVT outperforms GraphAny, the prior fully inductive graph model, by +8.93% and surpasses 12 individually tuned GNNs by at least +3.30%. These results establish the view space as a principled and effective ground for fully inductive node representation learning.
Problem

Research questions and friction points this paper is trying to address.

Develop a fully inductive graph model for unseen datasets
Address feature space variation in cross-dataset graph inference
Unify node representation learning via view space transformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph View Transformation for unified graph encoding
Recurrent GVT enables fully inductive node representation learning
View space as principled ground for cross-dataset generalization
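The paper's core architectural constraint is equivariance to both node and feature permutations, so that a pretrained model applies to graphs with arbitrarily ordered nodes and arbitrarily permuted feature dimensions. As a minimal illustration of this property (not the paper's actual GVT layer), the sketch below builds a DeepSets-style linear map on a node-by-feature matrix that is provably equivariant to row (node) and column (feature) permutations, and checks the property numerically; the coefficients `a`-`d` are hypothetical weights.

```python
import numpy as np

def equivariant_map(X, a=1.0, b=0.5, c=0.5, d=0.25):
    """A linear map on an (n_nodes, n_features) matrix that is equivariant
    to permutations of both rows and columns. It combines each entry with
    its row mean, its column mean, and the global mean; all four terms
    commute with row/column permutations."""
    row_mean = X.mean(axis=1, keepdims=True)  # (n, 1), broadcast over columns
    col_mean = X.mean(axis=0, keepdims=True)  # (1, d), broadcast over rows
    glob = X.mean()                           # scalar, broadcast everywhere
    return a * X + b * row_mean + c * col_mean + d * glob

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
P = np.eye(5)[rng.permutation(5)]  # random node permutation
Q = np.eye(3)[rng.permutation(3)]  # random feature permutation

# Equivariance: permuting the input permutes the output the same way.
lhs = equivariant_map(P @ X @ Q)
rhs = P @ equivariant_map(X) @ Q
print(np.allclose(lhs, rhs))  # True: f(P X Q) = P f(X) Q
```

Any model built by stacking such maps (with entrywise nonlinearities) inherits the equivariance, which is what lets a single pretrained network transfer across datasets with heterogeneous feature spaces without retraining.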
Dooho Lee
School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST)
Myeong Kong
School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST)
Minho Jeong
School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST)
Jaemin Yoo
Assistant Professor, KAIST
Data Mining · Machine Learning · Graph Neural Networks · Time Series Analysis