🤖 AI Summary
This work tackles cross-dataset fully inductive reasoning on graphs: generalizing a pretrained model to entirely unseen graphs whose feature spaces differ widely in both dimensionality and semantics, under a strict no-retraining constraint. Because any feature-space transformation risks breaking inductive applicability to unseen datasets, the design space of such models is tightly limited. To address this, the work proposes the "view-space" representation paradigm and introduces Graph View Transformation (GVT) along with its recurrent variant, Recurrent GVT, the first architectures to achieve strictly node- and feature-permutation-equivariant fully inductive modeling. By combining view-space representation learning, permutation-equivariant networks, and a no-retraining transfer mechanism, the method improves on GraphAny, the prior fully inductive state of the art, by 8.93% absolute and surpasses 12 individually tuned GNNs by at least 3.30% across 27 node-classification benchmarks, overcoming the representational bottleneck that feature heterogeneity imposes on graph generalization.
📝 Abstract
Generalizing a pretrained model to unseen datasets without retraining is an essential step toward a foundation model. However, achieving such cross-dataset, fully inductive inference is difficult in graph-structured data where feature spaces vary widely in both dimensionality and semantics. Any transformation in the feature space can easily violate the inductive applicability to unseen datasets, strictly limiting the design space of a graph model. In this work, we introduce the view space, a novel representational axis in which arbitrary graphs can be naturally encoded in a unified manner. We then propose Graph View Transformation (GVT), a node- and feature-permutation-equivariant mapping in the view space. GVT serves as the building block for Recurrent GVT, a fully inductive model for node representation learning. Pretrained on OGBN-Arxiv and evaluated on 27 node-classification benchmarks, Recurrent GVT outperforms GraphAny, the prior fully inductive graph model, by +8.93% and surpasses 12 individually tuned GNNs by at least +3.30%. These results establish the view space as a principled and effective ground for fully inductive node representation learning.
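The abstract's central requirement, a mapping that is equivariant to permutations of both nodes and features, can be illustrated with a minimal sketch. The layer below is not the paper's GVT; it is a classic exchangeable-matrix layer (DeepSets-style pooling along both axes of a node-by-feature matrix), shown only to make the equivariance property concrete. The coefficients `a`, `b`, `c`, `d` are illustrative placeholders.

```python
import numpy as np

def equivariant_layer(X, a=1.0, b=0.5, c=0.5, d=0.25):
    """Toy layer equivariant to row (node) and column (feature) permutations.

    Mixes the input with its row means, column means, and global mean;
    each pooled term is invariant along the axis it averages over, so
    permuting rows/columns of the input permutes the output identically.
    """
    row_mean = X.mean(axis=1, keepdims=True)   # one value per node
    col_mean = X.mean(axis=0, keepdims=True)   # one value per feature
    tot_mean = X.mean()                        # global scalar
    return a * X + b * row_mean + c * col_mean + d * tot_mean

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # 5 nodes, 3 features
P = rng.permutation(5)            # node permutation
Q = rng.permutation(3)            # feature permutation

# Equivariance check: permuting the input = permuting the output.
lhs = equivariant_layer(X[P][:, Q])
rhs = equivariant_layer(X)[P][:, Q]
assert np.allclose(lhs, rhs)
```

Because the layer never depends on the number or order of features, the same weights apply to graphs with arbitrary feature dimensions, which is the structural property that enables fully inductive transfer without retraining.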