🤖 AI Summary
Existing graph pre-training methods struggle in cross-domain scenarios due to coarse semantic alignment and insufficient training signals, hindering the acquisition of generalizable knowledge from highly heterogeneous graph data. This work proposes LEDA, a novel framework that introduces, for the first time, a latent semantic distribution alignment mechanism. LEDA adaptively projects multi-domain graph features into a shared semantic space and employs a variational semantic inference module to model the cross-domain shared latent semantic distribution, enabling fine-grained guidance for inter-domain alignment. The proposed approach substantially enhances cross-domain graph pre-training performance, achieving state-of-the-art results across diverse graph datasets and downstream tasks, with particularly significant gains in few-shot cross-domain settings compared to existing baselines and advanced general-purpose pre-trained models.
📝 Abstract
Recent advances in generic large models, such as GPT and DeepSeek, have motivated the introduction of universality to graph pre-training, which aims to learn rich and generalizable knowledge across diverse domains using graph representations to improve performance in various downstream applications. However, most existing methods struggle to learn effective knowledge from generic graphs, primarily due to simplistic data alignment and limited training guidance. The simplistic data alignment issue arises from applying a straightforward unification to highly diverse graph data, which fails to align semantics and misleads pre-training models. The limited training guidance problem lies in the arbitrary application of in-domain pre-training paradigms to cross-domain scenarios: while such paradigms are effective at enhancing discriminative representations within a single data space, they struggle to capture effective knowledge from many graphs. To address these challenges, we propose a novel Latent sEmantic Distribution Alignment (LEDA) model for universal graph pre-training. Specifically, we first introduce a dimension projection unit to adaptively align diverse domain features into a shared semantic space with minimal information loss. Furthermore, we design a variational semantic inference module to obtain the shared latent distribution. This distribution is then adopted to guide the domain projection, aligning it with shared semantics across domains and ensuring cross-domain semantic learning. LEDA exhibits strong performance across a broad range of graphs and downstream tasks. Remarkably, in few-shot cross-domain settings, it significantly outperforms in-domain baselines and advanced universal pre-training models.
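To make the alignment idea concrete, here is a minimal NumPy sketch of the pipeline the abstract describes: per-domain features of different dimensionalities are linearly projected into a shared semantic space, a shared latent Gaussian is estimated, and a KL term measures how far each projected domain sits from that shared distribution. All names, the linear projections, and the pooled-statistics "shared distribution" are illustrative assumptions for this sketch; the paper's actual dimension projection unit and variational semantic inference module are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(X, W):
    """Project domain features into the shared semantic space (stand-in for
    the dimension projection unit; a simple linear map here)."""
    return X @ W

def gaussian_params(Z):
    """Diagonal-Gaussian statistics of projected features (small epsilon
    keeps variances strictly positive)."""
    return Z.mean(axis=0), Z.var(axis=0) + 1e-6

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Two toy "domains" with different raw feature dimensionalities
# (e.g. a citation graph's 16-d node features vs. a molecular graph's 32-d ones).
d_shared = 8
X_a = rng.normal(size=(64, 16))
X_b = rng.normal(size=(64, 32))
W_a = rng.normal(size=(16, d_shared)) / np.sqrt(16)
W_b = rng.normal(size=(32, d_shared)) / np.sqrt(32)

Z_a, Z_b = project(X_a, W_a), project(X_b, W_b)

# Stand-in for the inferred shared latent distribution: pooled statistics
# over all domains' projected features.
mu_s, var_s = gaussian_params(np.vstack([Z_a, Z_b]))

# Per-domain alignment losses: minimizing these (w.r.t. W_a, W_b in a real
# trainable model) would pull each domain's projection toward the shared semantics.
loss_a = kl_diag_gauss(*gaussian_params(Z_a), mu_s, var_s)
loss_b = kl_diag_gauss(*gaussian_params(Z_b), mu_s, var_s)
```

In a full model, the projections and the latent distribution would be learned jointly (e.g. with a reparameterized variational encoder), but the sketch shows the core signal: a distribution-level divergence, rather than raw feature unification, guides inter-domain alignment.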