🤖 AI Summary
To address weak cross-domain transferability, heavy reliance on labeled data, and the difficulty of modeling heterogeneous graphs in graph learning, this paper proposes a structure-aware self-supervised learning framework for text-attributed graphs. Methodologically, it integrates large language models (LLMs) with graph neural networks (GNNs) to unify heterogeneous feature spaces via textual semantics; introduces a dual knowledge distillation mechanism that jointly compresses LLM and GNN knowledge into a lightweight MLP; and incorporates a memory-based representation alignment module that stores prototypical graph representations, thereby enhancing generalization. Experiments show that the method significantly outperforms state-of-the-art approaches on cross-domain transfer tasks while reducing inference cost by 42% and model parameters by 87% and maintaining competitive accuracy, yielding superior efficiency, scalability, and transfer capability.
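To illustrate the dual knowledge distillation idea described above, here is a minimal PyTorch sketch in which a lightweight MLP student matches the softened predictions of two frozen teachers (an LLM and a GNN). The class name `StudentMLP`, the weighting `alpha`, and the temperature are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch of co-distillation from two teachers; not the paper's code.

class StudentMLP(nn.Module):
    """Lightweight MLP student mapping node text embeddings to logits."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def dual_distillation_loss(student_logits, llm_logits, gnn_logits,
                           alpha=0.5, temperature=2.0):
    """KL-divergence distillation from two frozen teacher logit sets
    (LLM and GNN) into the student; alpha balances the two teachers."""
    t = temperature
    s = F.log_softmax(student_logits / t, dim=-1)
    llm_soft = F.softmax(llm_logits / t, dim=-1)
    gnn_soft = F.softmax(gnn_logits / t, dim=-1)
    loss_llm = F.kl_div(s, llm_soft, reduction="batchmean") * (t * t)
    loss_gnn = F.kl_div(s, gnn_soft, reduction="batchmean") * (t * t)
    return alpha * loss_llm + (1.0 - alpha) * loss_gnn
```

At inference time only the MLP student is kept, which is what makes the deployed model far cheaper than running the LLM or GNN teachers directly.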
📝 Abstract
Large-scale pretrained models have revolutionized Natural Language Processing (NLP) and Computer Vision (CV), showcasing remarkable cross-domain generalization abilities. In graph learning, however, models are typically trained on individual graph datasets, limiting their capacity to transfer knowledge across different graphs and tasks. This approach also relies heavily on large volumes of annotated data, which poses a significant challenge in resource-constrained settings. Unlike NLP and CV, graph-structured data presents unique challenges due to its inherent heterogeneity, including domain-specific feature spaces and structural diversity across applications. To address these challenges, we propose SSTAG, a novel structure-aware self-supervised learning method for text-attributed graphs (TAGs). By leveraging text as a unified representation medium for graph learning, SSTAG bridges the gap between the semantic reasoning of Large Language Models (LLMs) and the structural modeling capabilities of Graph Neural Networks (GNNs). Our approach introduces a dual knowledge distillation framework that co-distills both LLMs and GNNs into structure-aware multilayer perceptrons (MLPs), enhancing scalability on large-scale TAGs. Additionally, we introduce an in-memory mechanism that stores typical graph representations and aligns them with memory anchors in an in-memory repository to integrate invariant knowledge, thereby improving the model's generalization ability. Extensive experiments demonstrate that SSTAG outperforms state-of-the-art models on cross-domain transfer learning tasks, achieves exceptional scalability, and reduces inference costs while maintaining competitive performance.
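To make the in-memory alignment mechanism more concrete, the following is a rough PyTorch sketch in which each graph representation is matched to its nearest stored anchor (prototype) and pulled toward it. The function name `memory_alignment_loss`, the bank size, and the cosine-based objective are assumptions chosen for illustration; the paper's actual module may differ.

```python
import torch
import torch.nn.functional as F

# Rough sketch of aligning graph representations with memory anchors; not the paper's code.

def memory_alignment_loss(graph_repr, memory_bank):
    """Match each graph representation to its nearest memory anchor
    (a stored prototype) and minimize their cosine distance."""
    graph_repr = F.normalize(graph_repr, dim=-1)   # (B, d) unit-norm representations
    anchors = F.normalize(memory_bank, dim=-1)     # (M, d) unit-norm anchors
    sim = graph_repr @ anchors.t()                 # (B, M) cosine similarities
    nearest = sim.argmax(dim=-1)                   # index of closest anchor per graph
    matched = anchors[nearest]                     # (B, d) matched anchors
    # Minimize 1 - cosine similarity to the matched anchor
    return (1.0 - (graph_repr * matched).sum(dim=-1)).mean()

# Hypothetical usage: a bank of 64 prototype anchors in a 128-dimensional space.
memory_bank = torch.randn(64, 128)
graph_repr = torch.randn(8, 128)
loss = memory_alignment_loss(graph_repr, memory_bank)
```

In practice such a bank would be populated with representations of typical graphs seen during pretraining, so that new graphs are regularized toward previously stored, domain-invariant prototypes.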