RiemannGFM: Learning a Graph Foundation Model from Riemannian Geometry

📅 2025-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph neural networks (GNNs) exhibit weak cross-domain transferability and struggle with complex, text-free graph structures. Method: This paper introduces the first general-purpose graph foundation model grounded in Riemannian geometry. Its core innovations include: (i) identifying trees and cycles as a universal "graph vocabulary" and establishing their intrinsic correspondence with curved Riemannian manifolds; (ii) designing a product fiber bundle to unify the geometric modeling of heterogeneous graphs; and (iii) integrating Riemannian gradient optimization with non-Euclidean GNNs for structure-aware representation learning. Results: Evaluated on multiple real-world, multi-source graph datasets, the proposed model consistently outperforms state-of-the-art methods, demonstrating superior cross-domain transferability and structural generalization capability, particularly on graphs lacking textual node/edge attributes.
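The geometric intuition behind the tree/cycle vocabulary can be made concrete: tree-like substructures embed with low distortion in hyperbolic space (negative curvature), while cycles fit spherical space (positive curvature), and a product of the two factors handles graphs mixing both. Below is a minimal, hypothetical sketch of distances in such a product space, using the Poincaré ball model for the hyperbolic factor; it is an illustration of the underlying geometry, not the paper's actual RiemannGFM implementation.

```python
import math

def poincare_dist(x, y):
    """Hyperbolic distance in the Poincare ball: suits tree-like
    substructures, whose node count grows exponentially with depth."""
    sq = lambda v: sum(c * c for c in v)
    diff = sq([a - b for a, b in zip(x, y)])
    denom = (1 - sq(x)) * (1 - sq(y))
    return math.acosh(1 + 2 * diff / denom)

def sphere_dist(x, y):
    """Geodesic distance between unit vectors on the sphere:
    suits cycle-like substructures."""
    dot = sum(a * b for a, b in zip(x, y))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for float safety

def product_dist(xh, xs, yh, ys):
    """Distance in the product manifold H x S: the standard
    l2-combination of the per-factor geodesic distances."""
    dh = poincare_dist(xh, yh)
    ds = sphere_dist(xs, ys)
    return math.sqrt(dh * dh + ds * ds)

# A point is a pair (hyperbolic coords inside the unit ball, unit vector)
u = ([0.1, 0.2], [1.0, 0.0])
v = ([0.3, -0.1], [0.0, 1.0])
print(product_dist(u[0], u[1], v[0], v[1]))
```

Because each factor keeps its own curvature, node pairs related through a hierarchy and node pairs related through a cycle are separated along different axes of the product, which is the kind of structure-awareness the summary attributes to the product bundle construction.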

📝 Abstract
The foundation model has heralded a new era in artificial intelligence, pretraining a single model to offer cross-domain transferability across different datasets. Graph neural networks excel at learning graph data, the omnipresent non-Euclidean structure, but often lack generalization capacity. Hence, the graph foundation model is drawing increasing attention, and recent efforts leverage Large Language Models. On the one hand, existing studies primarily focus on text-attributed graphs, while a wider range of real graphs do not contain rich textual attributes. On the other hand, the sequential graph description tailored for the Large Language Model neglects structural complexity, a predominant characteristic of the graph. Such limitations motivate an important question: Can we go beyond Large Language Models and pretrain a universal model to learn the structural knowledge for any graph? The answer in the language or vision domain is a shared vocabulary. We observe that shared substructures also exist in the graph domain, thereby opening a new opportunity for a graph foundation model with a structural vocabulary. The key innovation is the discovery of a simple yet effective structural vocabulary of trees and cycles, and we explore its inherent connection to Riemannian geometry. Herein, we present a universal pretraining model, RiemannGFM. Concretely, we first construct a novel product bundle to incorporate the diverse geometries of the vocabulary. Then, on this constructed space, we stack Riemannian layers where the structural vocabulary, regardless of the specific graph, is learned in a Riemannian manifold, offering cross-domain transferability. Extensive experiments show the effectiveness of RiemannGFM on a diversity of real graphs.
Problem

Research questions and friction points this paper is trying to address.

Develops a universal graph foundation model
Leverages Riemannian geometry for structural learning
Addresses cross-domain graph transferability challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Riemannian geometry for graphs
Introduces a structural vocabulary of trees and cycles
Develops universal pretraining model RiemannGFM
Li Sun
North China Electric Power University, Beijing, China
Zhenhao Huang
North China Electric Power University, Beijing, China
Suyang Zhou
North China Electric Power University, Beijing, China
Qiqi Wan
North China Electric Power University
Hao Peng
Beihang University, Beijing, China
Philip Yu
University of Illinois, Chicago, USA