🤖 AI Summary
Graph Neural Networks (GNNs) face critical bottlenecks in cross-task and cross-domain transfer: poor knowledge adaptability, severe negative transfer, and the need for model or graph-structure modifications. To address these challenges, we propose a task- and domain-agnostic GNN transfer learning framework. Our approach integrates a pretrained GNN, a dynamic prediction head, a novel bridging network that fuses the source model with a concurrently trained one, and joint source-target optimization. This enables seamless transfer across arbitrary graph structures and output dimensions without altering the original model architecture or graph topology. We evaluate our method across 16 benchmark datasets and four transfer paradigms: Graph-to-Graph, Node-to-Node, Graph-to-Node, and Graph-to-Point-Cloud. It consistently outperforms state-of-the-art methods, establishing the first truly universal, plug-and-play GNN knowledge transfer solution.
📝 Abstract
Graph neural networks (GNNs) are conventionally trained on a per-domain, per-task basis, which creates a significant barrier to transferring the acquired knowledge to different, heterogeneous data setups. This paper introduces GraphBridge, a novel framework that enables knowledge transfer across disparate tasks and domains in GNNs without requiring modifications to task configurations or graph structures. Specifically, GraphBridge augments any pre-trained GNN with prediction heads and a bridging network that connects the input to the output layer. This architecture not only preserves the intrinsic knowledge of the original model but also supports outputs of arbitrary dimensions. To mitigate the negative transfer problem, GraphBridge merges the source model with a concurrently trained model, thereby reducing source bias when the model is applied to the target domain. Our method is thoroughly evaluated across diverse transfer learning scenarios, including Graph2Graph, Node2Node, Graph2Node, and Graph2Point-Cloud. Empirical validation over 16 datasets representative of these scenarios confirms the framework's capacity for task- and domain-agnostic transfer learning within graph-like data, marking a significant advancement in the field of GNNs.
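The abstract's core idea (keep a frozen pretrained encoder, fuse it with a concurrently trained one, and attach a new prediction head of arbitrary output dimension) can be illustrated with a minimal NumPy sketch. This is not the paper's actual implementation: the toy `gnn_layer`, the adjacency matrix, and the scalar fusion gate `alpha` are all simplifying assumptions; in GraphBridge the bridging network and head are learned modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(X, A, W):
    # One toy message-passing step: aggregate over the adjacency, then project.
    return np.tanh(A @ X @ W)

n_nodes, d_in, d_hid, d_out = 5, 8, 16, 3
A = np.eye(n_nodes)                      # toy adjacency (self-loops only)
X = rng.normal(size=(n_nodes, d_in))     # node features

# Frozen pretrained (source) encoder weights -- never updated on the target task.
W_src = rng.normal(size=(d_in, d_hid))
# Concurrently trained (target) encoder weights -- trainable on the target task.
W_tgt = rng.normal(size=(d_in, d_hid))

H_src = gnn_layer(X, A, W_src)
H_tgt = gnn_layer(X, A, W_tgt)

# Bridging step (hypothetical simplification): a gate fusing source and target
# representations; downweighting the source side reduces negative transfer.
alpha = 0.5                              # a learned parameter in practice
H = alpha * H_src + (1 - alpha) * H_tgt

# New prediction head: maps fused features to an arbitrary target output
# dimension, leaving the original architecture and graph topology untouched.
W_head = rng.normal(size=(d_hid, d_out))
logits = H @ W_head
print(logits.shape)                      # (5, 3)
```

The point of the sketch is structural: the source model is used purely as a frozen feature extractor, so "plug-and-play" transfer reduces to choosing the fusion gate and a fresh head, not editing the pretrained network or the graph.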