Multi-Domain Riemannian Graph Gluing for Building Graph Foundation Models

📅 2026-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of theoretical foundations in existing graph pre-training methods for cross-domain knowledge transfer, which hinders their generalization across diverse domains. The authors propose a Neural Manifold Gluing framework that models multiple graph datasets as a unified Riemannian manifold. By constructing local geometric representations through adaptive orthogonal frames, they establish, for the first time, a geometrically consistent transfer mechanism for graph foundation models. Integrating EMA-based prototype batch pre-training with a novel transferability metric, the method significantly outperforms current approaches on multi-domain graph tasks. Furthermore, the study uncovers a geometric scaling law linking dataset scale to manifold smoothness, which effectively enhances model transfer performance.
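The EMA-based prototype pre-training mentioned above can be pictured with a minimal sketch: per-domain prototype vectors are updated as an exponential moving average of the batch embeddings assigned to each domain. This is an illustrative assumption of how such an update might look, not the paper's implementation; the function `update_prototypes`, the `ema_decay` value, and the tensor layout are hypothetical.

```python
import torch

def update_prototypes(prototypes, embeddings, domain_ids, ema_decay=0.99):
    """EMA update of per-domain prototypes from a batch of graph embeddings.

    prototypes : (num_domains, dim) running prototype vectors
    embeddings : (batch, dim) encoder outputs for the current batch
    domain_ids : (batch,) index of the source domain of each embedding
    """
    for d in domain_ids.unique():
        mask = domain_ids == d
        batch_mean = embeddings[mask].mean(dim=0)
        # Blend the running prototype toward this domain's batch mean
        prototypes[d] = ema_decay * prototypes[d] + (1.0 - ema_decay) * batch_mean
    return prototypes
```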

📝 Abstract
Multi-domain graph pre-training integrates knowledge from diverse domains to enhance performance in the target domains, which is crucial for building graph foundation models. Despite initial success, existing solutions often fall short of answering a fundamental question: how is knowledge integrated or transferred across domains? This theoretical limitation motivates us to rethink the consistency and transferability between model pre-training and domain adaptation. In this paper, we propose a fresh Riemannian geometry perspective, whose core idea is to merge any graph dataset into a unified, smooth Riemannian manifold, enabling a systematic understanding of knowledge integration and transfer. To achieve this, our key contribution is the theoretical establishment of neural manifold gluing, which first characterizes local geometry using an adaptive orthogonal frame and then "glues" the local pieces together into a coherent whole. Building on this theory, we present the GraphGlue framework, which supports batched pre-training with EMA prototyping and provides a transferability measure based on geometric consistency. Extensive experiments demonstrate its superior performance across diverse graph domains. Moreover, we empirically validate GraphGlue's geometric scaling law, showing that larger dataset quantities improve model transferability by producing a smoother manifold. Code is available at https://github.com/RiemannGraph/GraphGlue.
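The adaptive orthogonal frame in the abstract characterizes local geometry before the pieces are glued. Below is a minimal, generic sketch of one way to obtain an orthonormal local frame, assuming it is built from neighbor-difference directions around an anchor embedding and orthonormalized via QR; the paper's adaptive construction may differ, and `local_orthogonal_frame` and its arguments are hypothetical.

```python
import torch

def local_orthogonal_frame(anchor, neighbors, frame_dim):
    """Orthonormal frame spanning local tangent directions at `anchor`.

    anchor    : (dim,) embedding of the anchor node
    neighbors : (k, dim) embeddings of its graph neighbors
    frame_dim : number of frame vectors to keep (<= min(k, dim))
    """
    diffs = neighbors - anchor        # (k, dim) local tangent directions
    # Reduced QR of the direction matrix; columns of q are orthonormal
    q, _ = torch.linalg.qr(diffs.T)   # (dim, min(k, dim))
    return q[:, :frame_dim]           # (dim, frame_dim) local frame
```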
Problem

Research questions and friction points this paper is trying to address.

multi-domain graph pre-training
knowledge transfer
graph foundation models
domain adaptation
Riemannian manifold
Innovation

Methods, ideas, or system contributions that make the work stand out.

Riemannian geometry
manifold gluing
graph foundation models
cross-domain transfer
geometric consistency
🔎 Similar Papers
No similar papers found.
Li Sun
East China Normal University
Image processing · Computer vision
Zhenhao Huang
North China Electric Power University
Data Mining · Deep Learning · Machine Learning · Graph Neural Networks
Silei Chen
North China Electric Power University, Beijing 102206, China
Lanxu Yang
North China Electric Power University, Beijing 102206, China
Junda Ye
Beijing University of Posts and Telecommunications, Beijing 100876, China
Sen Su
Beijing University of Posts and Telecommunications, Beijing 100876, China
Philip S. Yu
Professor of Computer Science, University of Illinois at Chicago
Data mining · Database · Privacy