🤖 AI Summary
Positional and structural encodings (PSEs) are increasingly adopted in graph neural networks (GNNs), yet their viability as universal foundational representations, in particular their cross-dataset fine-tuning efficiency, few-shot scalability, and generalization, has not been systematically studied.
Method: We propose the first comprehensive evaluation framework for PSEs, built on a multi-graph benchmark that integrates learnable PSE embeddings, downstream GNN augmentation (sketched in code below), and few-shot fine-tuning strategies.
Contribution/Results: Experiments demonstrate that PSEs consistently boost performance across diverse GNN architectures, showing strong task adaptability and promise for lightweight pretraining. However, we also identify expressivity limits: certain complex graph structures require bespoke PSE designs to reach optimal performance. This work provides the first empirical validation of PSEs as practical, yet inherently bounded, core components of graph foundation models, establishing both their utility and their limits in foundational graph representation learning.
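To make the augmentation setup concrete, the snippet below is a minimal, illustrative sketch of one widely used PSE, Laplacian eigenvector positional encodings, concatenated to raw node features before they enter a downstream GNN. It assumes NumPy, SciPy, and PyTorch, an undirected graph, and uses a dense eigendecomposition for simplicity; the function names (`laplacian_pe`, `augment_features`) are placeholders for exposition, not the paper's implementation.

```python
import numpy as np
import scipy.sparse as sp
import torch

def laplacian_pe(edge_index: np.ndarray, num_nodes: int, k: int = 8) -> torch.Tensor:
    """Return the k non-trivial eigenvectors of the normalized graph Laplacian.

    edge_index: a (2, E) array of symmetric (undirected) edges.
    Uses a dense eigensolver, so this sketch suits small graphs only.
    """
    row, col = edge_index
    adj = sp.coo_matrix((np.ones(row.shape[0]), (row, col)),
                        shape=(num_nodes, num_nodes))
    deg = np.asarray(adj.sum(axis=1)).flatten()
    d_inv_sqrt = np.power(np.maximum(deg, 1e-12), -0.5)
    # Symmetrically normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = sp.eye(num_nodes) - sp.diags(d_inv_sqrt) @ adj @ sp.diags(d_inv_sqrt)
    eigvals, eigvecs = np.linalg.eigh(lap.toarray())
    # Smallest eigenvalues first; drop the trivial constant eigenvector.
    return torch.from_numpy(eigvecs[:, 1:k + 1]).float()

def augment_features(x: torch.Tensor, pe: torch.Tensor) -> torch.Tensor:
    """Concatenate the PSE to the raw node features along the feature dim."""
    return torch.cat([x, pe], dim=-1)
```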
📝 Abstract
Recent advances in integrating positional and structural encodings (PSEs) into graph neural networks (GNNs) have significantly enhanced their performance across various graph learning tasks. However, the general applicability of these encodings and their potential to serve as foundational representations for graphs remain uncertain. This paper investigates the fine-tuning efficiency, scalability with sample size, and generalization capability of learnable PSEs across diverse graph datasets. Specifically, we evaluate their potential as universal pre-trained models that can be adapted to new tasks with minimal fine-tuning and limited data. Furthermore, we assess the expressivity of the learned representations, particularly when used to augment downstream GNNs. Through extensive benchmarking and empirical analysis, we demonstrate that PSEs generally enhance downstream models, although some datasets require task-specific PSE augmentations to achieve optimal performance. Nevertheless, our findings highlight their significant potential to become integral components of future graph foundation models. We provide new insights into the strengths and limitations of PSEs, contributing to the broader discourse on foundation models in graph learning.
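As an illustration of the few-shot fine-tuning setting described above, the sketch below freezes a pretrained PSE encoder and trains only a lightweight task head on a small labeled set. `pse_encoder`, `task_head`, and `few_shot_loader` are hypothetical placeholders standing in for the paper's actual components, not names taken from it.

```python
import torch
import torch.nn as nn

def few_shot_finetune(pse_encoder: nn.Module,
                      task_head: nn.Module,
                      few_shot_loader,
                      epochs: int = 20,
                      lr: float = 1e-3) -> nn.Module:
    # Freeze the pretrained PSE encoder; only the head receives gradients.
    for p in pse_encoder.parameters():
        p.requires_grad = False
    pse_encoder.eval()

    optimizer = torch.optim.Adam(task_head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        # The loader yields a handful of labeled graphs (few-shot regime).
        for x, edge_index, y in few_shot_loader:
            with torch.no_grad():
                pse = pse_encoder(x, edge_index)  # frozen PSE features
            # The head consumes raw features augmented with the PSE.
            logits = task_head(torch.cat([x, pse], dim=-1))
            loss = loss_fn(logits, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return task_head
```

Freezing the encoder keeps the trainable parameter count small, which is what makes adaptation with limited data and minimal fine-tuning plausible in this setting.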