Towards A Universal Graph Structural Encoder

📅 2025-04-15
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Graph representation transfer across domains is hindered by substantial topological discrepancies and the difficulty of modeling structural complexity, especially for graphs from heterogeneous domains (e.g., molecular, social, and citation graphs). Method: We propose the first general-purpose graph structural encoder capable of cross-domain transfer. Built on a Graph Transformer architecture, it introduces a novel attention mechanism guided by graph inductive biases, enabling multi-level, fine-grained structural modeling, along with theoretically expressive positional and structural embeddings. The encoder is pre-trained with multiple self-supervised objectives, ensuring compatibility with diverse downstream encoders, including GNNs and large language models. Results: On both synthetic and real-world benchmarks, the encoder achieves significant performance gains with minimal fine-tuning, attaining state-of-the-art results in 81.6% of evaluated scenarios across multiple graph models and datasets.
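The summary does not spell out the attention mechanism, but a minimal sketch of the general idea is possible, assuming a Graphormer-style additive bias derived from pairwise hop distances; the class name `BiasedGraphAttention`, the distance-embedding table, and the single-head design are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class BiasedGraphAttention(nn.Module):
    """Self-attention with an additive structural bias (illustrative sketch)."""

    def __init__(self, dim: int, max_dist: int = 16):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # One learnable scalar bias per discretized pairwise hop distance.
        self.dist_bias = nn.Embedding(max_dist + 1, 1)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, dist: torch.LongTensor) -> torch.Tensor:
        # x: (N, dim) node states; dist: (N, N) pairwise shortest-path hops.
        d = dist.clamp_max(self.dist_bias.num_embeddings - 1)
        scores = (self.q(x) @ self.k(x).T) * self.scale   # (N, N) attention logits
        scores = scores + self.dist_bias(d).squeeze(-1)   # inject structural bias
        return torch.softmax(scores, dim=-1) @ self.v(x)  # (N, dim) outputs
```

The additive bias lets attention weights depend on graph topology rather than on content similarity alone, which is one common way to realize "attention informed by graph inductive bias."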

📝 Abstract
Recent advancements in large-scale pre-training have shown the potential to learn generalizable representations for downstream tasks. In the graph domain, however, capturing and transferring structural information across different graph domains remains challenging, primarily due to the inherent differences in topological patterns across various contexts. Additionally, most existing models struggle to capture the complexity of rich graph structures, leading to inadequate exploration of the embedding space. To address these challenges, we propose GFSE, a universal graph structural encoder designed to capture transferable structural patterns across diverse domains such as molecular graphs, social networks, and citation networks. GFSE is the first cross-domain graph structural encoder pre-trained with multiple self-supervised learning objectives. Built on a Graph Transformer, GFSE incorporates attention mechanisms informed by graph inductive bias, enabling it to encode intricate multi-level and fine-grained topological features. The pre-trained GFSE produces generic and theoretically expressive positional and structural encoding for graphs, which can be seamlessly integrated with various downstream graph feature encoders, including graph neural networks for vectorized features and Large Language Models for text-attributed graphs. Comprehensive experiments on synthetic and real-world datasets demonstrate GFSE's capability to significantly enhance model performance while requiring substantially less task-specific fine-tuning. Notably, GFSE achieves state-of-the-art performance in 81.6% of evaluated cases, spanning diverse graph models and datasets, highlighting its potential as a powerful and versatile encoder for graph-structured data.
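As a usage-level sketch of the integration described in the abstract: the pre-trained encoder produces positional/structural encodings (PSE) from topology alone, which are concatenated with domain node features before a downstream model. Names such as `pretrained_gfse` and `task_gnn`, and the encoder's call signature, are hypothetical placeholders rather than a released API:

```python
import torch
import torch.nn as nn

def encode_with_pse(pretrained_gfse: nn.Module,
                    task_gnn: nn.Module,
                    x: torch.Tensor,
                    edge_index: torch.LongTensor) -> torch.Tensor:
    """Augment node features with frozen structural encodings (sketch)."""
    with torch.no_grad():                             # encoder stays frozen
        pse = pretrained_gfse(edge_index, x.size(0))  # (N, d_pse), topology only
    h = torch.cat([x, pse], dim=-1)                   # (N, d_x + d_pse)
    return task_gnn(h, edge_index)                    # fine-tune the task model only
```

Keeping the structural encoder frozen and training only the small downstream model is consistent with the abstract's claim of "substantially less task-specific fine-tuning."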
Problem

Research questions and friction points this paper is trying to address.

Capturing transferable structural patterns across diverse graph domains
Encoding intricate multi-level and fine-grained topological features
Enhancing model performance with less task-specific fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-domain graph structural encoder GFSE
Multiple self-supervised learning objectives (a pre-training sketch follows this list)
Graph Transformer with inductive bias attention
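A minimal sketch of how several self-supervised objectives might be combined in one pre-training step; the task heads and loss weights below are illustrative assumptions, not the paper's actual objective set:

```python
import torch

def pretrain_step(encoder, heads, weights, batch, optimizer):
    """One multi-objective SSL update (illustrative, not the paper's recipe)."""
    z = encoder(batch)                                  # shared structural codes
    # Each head maps shared codes to a scalar self-supervised loss.
    losses = {name: head(z, batch) for name, head in heads.items()}
    total = sum(weights[name] * losses[name] for name in losses)
    optimizer.zero_grad()
    total.backward()                                    # joint gradient step
    optimizer.step()
    return {name: loss.item() for name, loss in losses.items()}
```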
Authors
Jialin Chen
Yale University
Foundation Models · Graph Learning · Multimodal RAG
Haolan Zuo
Department of Computer Science, Yale University
Haoyu Peter Wang
Department of Electrical and Computer Engineering, Georgia Institute of Technology
Siqi Miao
Georgia Institute of Technology
Machine Learning · Geometric Deep Learning · Graph Neural Networks · AI for Science
Pan Li
Department of Electrical and Computer Engineering, Georgia Institute of Technology
Rex Ying
Department of Computer Science, Yale University