Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via Topology Alignment

๐Ÿ“… 2025-02-04
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
This paper addresses the failure of cross-domain knowledge transfer in multi-domain graph foundation models, caused by substantial topological heterogeneity, data sparsity, and vulnerability to noise and adversarial attacks. The authors propose the first topology-aligned, unified multi-domain graph foundation model framework. Methodologically, they (1) design an adaptive feature-topology co-alignment mechanism to mitigate inter-domain structural heterogeneity; (2) develop a theoretically grounded graph structure purification module to enhance robustness against perturbations; and (3) introduce a lightweight graph-structure prompt-tuning strategy for efficient cross-domain generalization. Empirical evaluation on both homophilic and heterophilic graph benchmarks demonstrates significant improvements in cross-domain transfer performance, alongside enhanced resilience to noise and adversarial attacks. The paper also provides a theoretical analysis establishing an upper bound on the generalization error, offering formal guarantees of model generalizability.
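The adaptive feature-topology co-alignment described above can be sketched minimally: blend the observed adjacency matrix with a kNN graph built from feature similarity, so that noisy or missing edges are compensated by feature evidence. Everything below is an illustrative assumption, not the authors' implementation; in particular the fixed scalar `lam` stands in for the paper's learned feature/topology balance.

```python
import numpy as np

def cosine_knn_graph(X, k=2):
    """Build a symmetric kNN graph from cosine similarity of node features
    (hypothetical helper, not from the paper)."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)          # exclude self-loops from neighbour choice
    A_feat = np.zeros_like(S)
    for i in range(S.shape[0]):
        nbrs = np.argsort(S[i])[-k:]      # indices of the k most similar nodes
        A_feat[i, nbrs] = 1.0
    return np.maximum(A_feat, A_feat.T)   # symmetrize

def refine_topology(A, X, lam=0.5, k=2):
    """Blend the observed adjacency A with a feature-similarity graph.

    `lam` weights observed topology against feature evidence; the paper
    adapts this balance, here it is a fixed scalar for illustration.
    """
    A_feat = cosine_knn_graph(X, k=k)
    return lam * A + (1.0 - lam) * A_feat
```

A refined graph of this form keeps an edge strongly weighted only when topology and features agree, which is one simple way to suppress noisy connections before pre-training.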

๐Ÿ“ Abstract
Recent advances in CV and NLP have inspired researchers to develop general-purpose graph foundation models through pre-training across diverse domains. However, a fundamental challenge arises from the substantial differences in graph topologies across domains. Additionally, real-world graphs are often sparse and prone to noisy connections and adversarial attacks. To address these issues, we propose the Multi-Domain Graph Foundation Model (MDGFM), a unified framework that aligns and leverages cross-domain topological information to facilitate robust knowledge transfer. MDGFM bridges different domains by adaptively balancing features and topology while refining original graphs to eliminate noise and align topological structures. To further enhance knowledge transfer, we introduce an efficient prompt-tuning approach. By aligning topologies, MDGFM not only improves multi-domain pre-training but also enables robust knowledge transfer to unseen domains. Theoretical analyses provide guarantees of MDGFM's effectiveness and domain generalization capabilities. Extensive experiments on both homophilic and heterophilic graph datasets validate the robustness and efficacy of our method.
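As a rough illustration of the prompt-tuning idea in the abstract: the pre-trained message-passing backbone stays frozen, and only a small learnable prompt vector added to the node features is adapted for a new domain. The layer, weights, and prompt below are hypothetical stand-ins, not MDGFM's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_gnn_layer(A, X, W):
    """One frozen message-passing layer: mean-aggregate neighbours, then a linear map."""
    deg = A.sum(axis=1, keepdims=True) + 1e-12   # avoid division by zero
    return np.tanh(((A @ X) / deg) @ W)

# Pre-trained weights stay fixed; random values stand in for a real checkpoint.
W = rng.normal(size=(4, 4))

# Only this small prompt vector would be optimised on the target domain.
prompt = np.zeros(4)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))

H = frozen_gnn_layer(A, X + prompt, W)  # prompted forward pass; backbone untouched
```

Because only `prompt` carries gradients, adaptation to an unseen domain touches a handful of parameters instead of the whole backbone, which is what makes this style of fine-tuning lightweight.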
Problem

Research questions and friction points this paper is trying to address.

Aligning diverse graph topologies across domains
Achieving robust knowledge transfer in graph models
Removing noisy connections while preserving topological structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns cross-domain topological information
Refines graphs to eliminate noise
Introduces an efficient prompt-tuning approach
Shuo Wang
University of Electronic Science and Technology of China, Chengdu, Sichuan Province, China
Bokui Wang
University of Electronic Science and Technology of China, Chengdu, Sichuan Province, China
Zhixiang Shen
University of Electronic Science and Technology of China
graph neural networks · graph foundation model · llm
Boyan Deng
University of Electronic Science and Technology of China, Chengdu, Sichuan Province, China
Zhao Kang
University of Electronic Science and Technology of China, Chengdu, Sichuan Province, China