H$^3$GNNs: Harmonizing Heterophily and Homophily in GNNs via Joint Structural Node Encoding and Self-Supervised Learning

📅 2025-04-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph neural networks (GNNs) struggle to model heterophilous and homophilous graph structures simultaneously, a difficulty amplified under self-supervised learning. To address this, the authors propose H³GNNs, an end-to-end framework that jointly encodes linear/non-linear node feature projections and K-hop structural representations. It introduces a Weighted Graph Convolutional Network (WGCN) and a cross-attention mechanism to achieve structure-aware node representation learning. Further, it employs dynamic masking strategies guided by node prediction difficulty within a teacher-student self-supervised training paradigm. Evaluated on four heterophilous and three homophilous graph benchmarks, H³GNNs achieves state-of-the-art (SOTA) performance on all heterophilous datasets and matches SOTA on the homophilous ones, demonstrating improved generalization across graph types. Key innovations: (i) joint structure-feature encoding, (ii) difficulty-driven dynamic masking, and (iii) a teacher-student collaborative self-supervised mechanism.

📝 Abstract
Graph Neural Networks (GNNs) struggle to balance heterophily and homophily in representation learning, a challenge further amplified in self-supervised settings. We propose H$^3$GNNs, an end-to-end self-supervised learning framework that harmonizes both structural properties through two key innovations: (i) Joint Structural Node Encoding. We embed nodes into a unified space combining linear and non-linear feature projections with K-hop structural representations via a Weighted Graph Convolution Network (WGCN). A cross-attention mechanism enhances awareness of, and adaptability to, heterophily and homophily. (ii) Self-Supervised Learning Using Teacher-Student Predictive Architectures with Node-Difficulty-Driven Dynamic Masking Strategies. In our teacher-student model, the student sees the masked input graph and predicts, in the joint encoding space, the node features inferred by the teacher, which sees the full input graph. To increase learning difficulty, we introduce two novel node-predictive-difficulty-based masking strategies. Experiments on seven benchmarks (four heterophily datasets and three homophily datasets) confirm the effectiveness and efficiency of H$^3$GNNs across diverse graph types. Our H$^3$GNNs achieves overall state-of-the-art performance on the four heterophily datasets, while retaining on-par performance with previous state-of-the-art methods on the three homophily datasets.
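The K-hop structural representation described in innovation (i) can be sketched as a per-hop weighted aggregation. This is a minimal illustrative sketch, not the paper's implementation: it uses scalar node features and fixed hop weights, and omits the learned linear/non-linear projections and the cross-attention mechanism; all function names are assumptions.

```python
from collections import deque

def k_hop_neighbors(adj, v, k_max):
    """Return {k: set of nodes exactly k hops from v} for k = 1..k_max (BFS)."""
    dist = {v: 0}
    hops = {k: set() for k in range(1, k_max + 1)}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == k_max:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                hops[dist[w]].add(w)
                queue.append(w)
    return hops

def structural_encoding(adj, x, hop_weights):
    """h_v = hop_weights[0] * x_v + sum_k hop_weights[k] * mean of x over hop-k neighbors.

    Scalar features keep the sketch short; the paper uses feature vectors
    and learned, structure-aware weights.
    """
    k_max = len(hop_weights) - 1
    h = []
    for v in range(len(x)):
        hops = k_hop_neighbors(adj, v, k_max)
        val = hop_weights[0] * x[v]
        for k in range(1, k_max + 1):
            if hops[k]:
                val += hop_weights[k] * sum(x[u] for u in hops[k]) / len(hops[k])
        h.append(val)
    return h

# Path graph 0-1-2 with scalar features [1, 2, 3] and weights [1.0, 0.5].
adj = {0: [1], 1: [0, 2], 2: [1]}
print(structural_encoding(adj, [1.0, 2.0, 3.0], [1.0, 0.5]))  # -> [2.0, 3.0, 4.0]
```

Down-weighting or up-weighting individual hops is what lets such an encoding adapt between homophilous graphs (near neighbors informative) and heterophilous ones (distant structure informative).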
Problem

Research questions and friction points this paper is trying to address.

Balancing heterophily and homophily in GNN representation learning
Enhancing self-supervised learning with joint structural node encoding
Improving adaptability via dynamic masking strategies in teacher-student models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint structural node encoding with WGCN
Self-supervised teacher-student predictive architecture
Node-difficulty driven dynamic masking strategies
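The difficulty-driven masking idea above can be sketched as follows, assuming the framework tracks a per-node teacher-student prediction error from the previous training step. The function name and the hardest-first selection rule are illustrative assumptions; the paper proposes two masking strategies whose exact forms are not reproduced here.

```python
def difficulty_driven_mask(node_errors, mask_ratio):
    """Select the hardest nodes (largest teacher-student prediction error)
    to mask in the next self-supervised step."""
    num_masked = max(1, int(len(node_errors) * mask_ratio))
    # Rank node indices by prediction difficulty, hardest first.
    ranked = sorted(range(len(node_errors)),
                    key=lambda i: node_errors[i], reverse=True)
    return set(ranked[:num_masked])

# Five nodes; nodes 2 and 0 have the largest errors, so they get masked.
errors = [0.8, 0.1, 0.9, 0.3, 0.2]
print(difficulty_driven_mask(errors, mask_ratio=0.4))  # -> {0, 2}
```

Recomputing the mask each step keeps the student focused on nodes it still predicts poorly, rather than on a fixed random subset.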