Out-of-Distribution Graph Models Merging

📅 2025-06-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses the challenge of out-of-distribution (OOD) graph model merging without access to raw source or target domain data. Methodologically, it introduces an architecture-agnostic fusion paradigm grounded in implicit domain-invariant knowledge, enabling cross-domain graph generation; integrates a plug-and-play Mixture-of-Experts (MoE) module with parameter-mask fine-tuning for generalized merging and adaptive optimization of heterogeneous graph neural networks; and incorporates distributionally robust optimization alongside theoretical generalization error bound analysis. Empirically, the approach achieves an average accuracy gain of 8.2% across multiple cross-domain graph benchmarks, substantially outperforming existing data-free fusion methods. Theoretically, it establishes a tighter upper bound on generalization error, providing formal guarantees for model performance under distribution shift. Collectively, this work advances a novel federated knowledge integration paradigm for graph models, enabling robust, scalable, and theoretically principled ensemble learning without data sharing.
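The distributionally robust optimization component is not detailed on this page; under standard DRO assumptions it would take roughly the following min-max shape (a sketch only: \(P_{\mathrm{mix}}\), \(\rho\), \(D\), and \(\ell\) are generic symbols for the generated domain mixture, the shift budget, a divergence, and the task loss, none taken from the paper):

```latex
\min_{\theta}\;
\max_{Q:\, D(Q \,\|\, P_{\mathrm{mix}}) \le \rho}\;
\mathbb{E}_{G \sim Q}\big[\, \ell\big(f_{\theta}(G)\big) \,\big]
```

Training the merged model \(f_\theta\) against the worst-case distribution \(Q\) inside the divergence ball around the generated mixture is the usual mechanism behind generalization error bounds of the kind the summary mentions.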

📝 Abstract
This paper studies a novel problem of out-of-distribution graph model merging, which aims to construct a generalized model from multiple graph models pre-trained on different domains with distribution discrepancy. The problem is challenging because of the difficulty of learning the domain-invariant knowledge implicitly encoded in model parameters and of consolidating expertise from potentially heterogeneous GNN backbones. In this work, we propose a graph generation strategy that instantiates the mixture distribution of multiple domains. We then merge and fine-tune the pre-trained graph models via an MoE module and a masking mechanism for generalized adaptation. Our framework is architecture-agnostic and can operate without any source/target domain data. Both theoretical analysis and experimental results demonstrate the effectiveness of our approach in addressing the model generalization problem.
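As a concrete illustration of the graph generation strategy, the sketch below samples synthetic graphs from a mixture over per-domain statistics. Everything here is an assumption for illustration (the domain statistics, mixture weights, and function names are not the authors' code); it only shows the shape of the idea: instantiate the mixture distribution, then let the generated graphs stand in for the unavailable source/target data.

```python
# Hypothetical sketch of mixture-distribution graph generation.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Assumed per-domain summary statistics; with no access to raw source data,
# these would have to be recovered from the pre-trained models themselves.
DOMAIN_STATS = [
    {"n_nodes": 60, "edge_prob": 0.10, "feat_mean": -1.0},
    {"n_nodes": 90, "edge_prob": 0.05, "feat_mean": 0.0},
    {"n_nodes": 40, "edge_prob": 0.20, "feat_mean": 1.0},
]
MIX_WEIGHTS = [0.4, 0.4, 0.2]  # hypothetical mixture over three source domains

def sample_mixture_graph(feat_dim: int = 16) -> nx.Graph:
    """Draw one synthetic graph from the instantiated domain mixture."""
    k = int(rng.choice(len(DOMAIN_STATS), p=MIX_WEIGHTS))
    stats = DOMAIN_STATS[k]
    g = nx.gnp_random_graph(
        stats["n_nodes"], stats["edge_prob"], seed=int(rng.integers(2**31 - 1))
    )
    for v in g.nodes:  # node features centered at the domain-specific mean
        g.nodes[v]["x"] = rng.normal(stats["feat_mean"], 1.0, size=feat_dim)
    g.graph["domain"] = k
    return g

# A calibration set of generated graphs used for merging and fine-tuning.
calibration_graphs = [sample_mixture_graph() for _ in range(64)]
```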
Problem

Research questions and friction points this paper is trying to address.

Merging graph models from different domains with distribution discrepancy
Learning domain-invariant knowledge from heterogeneous GNN backbones
Generalizing models without source/target domain data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph generation strategy for domain mixture
MoE module and masking mechanism (a sketch follows this list)
Architecture-agnostic framework without domain data
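The sketch below illustrates the MoE-plus-masking idea in PyTorch: pre-trained experts are frozen, a gate routes inputs across them, and a learnable soft mask per expert is fine-tuned on generated graphs. The class name, the shared-embedding simplification, and the sigmoid mask are assumptions for illustration, not the authors' implementation; heterogeneous GNN backbones are abstracted as modules producing a common embedding.

```python
# Minimal sketch of merging frozen experts via gating plus trainable masks.
import torch
import torch.nn as nn

class MaskedMoE(nn.Module):
    """Gated mixture of frozen experts with a trainable mask per expert."""

    def __init__(self, experts: list[nn.Module], embed_dim: int):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        for p in self.experts.parameters():
            p.requires_grad_(False)  # pre-trained experts stay frozen
        # One soft mask per expert (sigmoid(0) = 0.5 at initialization);
        # fine-tuning adapts only the masks and the gate.
        self.masks = nn.ParameterList(
            [nn.Parameter(torch.zeros(embed_dim)) for _ in experts]
        )
        self.gate = nn.Linear(embed_dim, len(experts))  # input-dependent routing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Mask each expert's embedding, then mix with gate weights.
        outs = [torch.sigmoid(m) * e(x) for e, m in zip(self.experts, self.masks)]
        w = torch.softmax(self.gate(x), dim=-1)          # (batch, n_experts)
        stacked = torch.stack(outs, dim=-1)              # (batch, dim, n_experts)
        return (stacked * w.unsqueeze(1)).sum(dim=-1)    # (batch, dim)

# Stand-ins for heterogeneous GNN backbones mapped to a shared embedding space.
experts = [nn.Linear(16, 16) for _ in range(3)]
model = MaskedMoE(experts, embed_dim=16)
merged = model(torch.randn(8, 16))  # (8, 16) merged representation
```

Freezing the experts keeps each domain's pre-trained knowledge intact, while the masks give a cheap, data-free adaptation surface: only a few parameters per expert are updated on the generated calibration graphs.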
Yidi Wang
School of Computing and Information Technology, Great Bay University

Jiawei Gu
Sun Yat-sen University
Natural language processing · Multimodal reasoning

Xiaobing Pei
School of Software Engineering, Huazhong University of Science and Technology

Xubin Zheng
Great Bay University
Bioinformatics and Computational Biology

Xiao Luo
Department of Computer Science, University of California, Los Angeles

Pengyang Wang
Assistant Professor, University of Macau
data mining · representation learning · urban computing

Ziyue Qiao
Assistant Professor, Great Bay University
Data Mining · Graph Machine Learning · Knowledge Graph · AI for Science