🤖 AI Summary
Shallow node embedding methods such as node2vec are transductive: they cannot produce embeddings for nodes unseen during training. The authors propose inductive node2vec (iN2V), which pairs modifications to the node2vec training procedure with a post-hoc step that derives embeddings for unseen nodes, enabling inductive use of an otherwise transductive method. iN2V works as a plug-in to create new embeddings or enrich existing ones, and it can be combined with other embedding methods. Across several benchmark datasets, it improves node classification accuracy by 1 point on average, and by up to 6 points depending on the dataset and the number of unseen nodes, while retaining the efficiency of shallow embedding methods.
📝 Abstract
Shallow node embeddings like node2vec (N2V) can be used for nodes without features or to supplement existing features with structure-based information. However, embedding methods like N2V cannot be applied to new nodes, which restricts them to the transductive setting where the entire graph, including the test nodes, is available during training. We propose inductive node2vec (iN2V), which combines a post-hoc procedure to compute embeddings for nodes unseen during training with modifications to the original N2V training procedure that prepare the embeddings for this post-hoc procedure. We conduct experiments on several benchmark datasets and demonstrate that iN2V is an effective approach to bringing transductive embeddings to an inductive setting. Using iN2V embeddings improves node classification by 1 point on average, with up to 6 points of improvement depending on the dataset and the number of unseen nodes. iN2V is a plug-in approach to create new or enrich existing embeddings. It can also be combined with other embedding methods, making it a versatile approach for inductive node representation learning. Code to reproduce the results is available at https://github.com/Foisunt/iN2V.
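To make the post-hoc idea concrete, here is a minimal sketch of one natural instantiation: an unseen node receives the mean of its trained neighbors' embeddings. This is only an illustration of the general neighborhood-based post-hoc principle, not the paper's actual procedure; the function and variable names (`embed_unseen`, `emb`) are ours.

```python
import numpy as np

def embed_unseen(node, neighbors, emb):
    """Sketch of a post-hoc embedding for an unseen node:
    average the embeddings of its neighbors that were seen in training.

    emb: dict mapping trained node id -> np.ndarray embedding.
    """
    known = [n for n in neighbors if n in emb]
    if not known:
        # No trained neighbor available; fall back to a zero vector.
        dim = next(iter(emb.values())).shape
        return np.zeros(dim)
    return np.mean([emb[n] for n in known], axis=0)

# Toy example with 2-dimensional embeddings for three trained nodes.
emb = {0: np.array([1.0, 0.0]),
       1: np.array([0.0, 1.0]),
       2: np.array([1.0, 1.0])}

# Unseen node 3 is connected to trained nodes 0 and 1.
new_emb = embed_unseen(3, [0, 1], emb)
print(new_emb)  # [0.5 0.5]
```

In practice the paper additionally modifies N2V training so that the learned embedding space is well suited to this kind of aggregation; the sketch above only shows the inference-time step.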