H$^2$GFM: Towards unifying Homogeneity and Heterogeneity on Text-Attributed Graphs

📅 2025-06-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing graph foundation models (GFMs) are limited to homogeneous text-attributed graphs (HoTAGs) and struggle to model heterogeneous text-attributed graphs (HeTAGs) with diverse node and edge types. To address this, we propose the first unified GFM framework. Methodologically, we introduce a Context-adaptive Graph Transformer (CGT) that jointly aligns textual meta-relations and encodes contextual semantics for higher-order representation learning; design a Mixture-of-Experts CGT mechanism to collaboratively capture both local neighborhood structures and global heterogeneity; and employ joint pretraining via contrastive learning and masked node reconstruction. Extensive experiments on multiple HoTAGs and HeTAGs benchmarks—as well as zero-shot and few-shot transfer tasks—demonstrate consistent and significant improvements over state-of-the-art methods, with up to 12.7% gain in generalization performance. The framework exhibits strong robustness across graph types and downstream tasks.

Technology Category

Application Category

📝 Abstract
The growing interests and applications of graph learning in diverse domains have propelled the development of a unified model generalizing well across different graphs and tasks, known as the Graph Foundation Model (GFM). Existing research has leveraged text-attributed graphs (TAGs) to tackle the heterogeneity in node features among graphs. However, they primarily focus on homogeneous TAGs (HoTAGs), leaving heterogeneous TAGs (HeTAGs), where multiple types of nodes/edges reside, underexplored. To enhance the capabilities and applications of GFM, we introduce H$^2$GFM, a novel framework designed to generalize across both HoTAGs and HeTAGs. Our model projects diverse meta-relations among graphs under a unified textual space, and employs a context encoding to capture spatial and higher-order semantic relationships. To achieve robust node representations, we propose a novel context-adaptive graph transformer (CGT), effectively capturing information from both context neighbors and their relationships. Furthermore, we employ a mixture of CGT experts to capture the heterogeneity in structural patterns among graph types. Comprehensive experiments on a wide range of HoTAGs and HeTAGs as well as learning scenarios demonstrate the effectiveness of our model.
Problem

Research questions and friction points this paper is trying to address.

Unifying homogeneous and heterogeneous text-attributed graphs for generalization
Addressing underexplored heterogeneous TAGs with multiple node/edge types
Enhancing Graph Foundation Models with robust context-adaptive representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies Homogeneous and Heterogeneous Text-Attributed Graphs
Projects meta-relations under unified textual space
Employs context-adaptive graph transformer for robust representations
🔎 Similar Papers
No similar papers found.