Scalable Heterogeneous Graph Learning via Heterogeneous-aware Orthogonal Prototype Experts

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing heterogeneous graph neural networks (HGNNs): they rely on a single shared linear decoding head and thus struggle to capture fine-grained semantic relationships, often overfitting to central nodes while neglecting long-tail ones. To overcome this, the authors propose HOPE, a plug-and-play heterogeneous graph decoding framework that dynamically routes nodes to semantically aligned expert decoders via a heterogeneity-aware, prototype-based routing mechanism. HOPE further enforces orthogonality constraints among experts to enhance diversity and prevent expert collapse. By integrating prototype learning with a Mixture-of-Experts architecture, HOPE consistently boosts the performance of multiple state-of-the-art HGNN backbones across four real-world datasets, achieving significant gains with minimal computational overhead.
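To make the routing idea concrete, here is a minimal PyTorch sketch (not the authors' released code) of a prototype-routed decoding head: each node embedding is compared against learnable expert prototypes, and the resulting similarity weights mix the outputs of independent expert decoders. The class name, dimensions, and the use of soft (rather than hard top-1) routing are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeRoutedDecoder(nn.Module):
    """Replaces a single shared linear head with K prototype-routed expert heads."""

    def __init__(self, hidden_dim: int, num_classes: int, num_experts: int = 4):
        super().__init__()
        # One learnable prototype per expert; nodes are routed by cosine similarity.
        self.prototypes = nn.Parameter(torch.randn(num_experts, hidden_dim))
        # Each expert is an independent linear decoding head.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, num_classes) for _ in range(num_experts)]
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, hidden_dim) embeddings produced by any HGNN encoder.
        sim = F.normalize(h, dim=-1) @ F.normalize(self.prototypes, dim=-1).t()
        weights = F.softmax(sim, dim=-1)                       # (N, K) soft routing weights
        logits = torch.stack([e(h) for e in self.experts], 1)  # (N, K, C) per-expert logits
        return (weights.unsqueeze(-1) * logits).sum(dim=1)     # prototype-weighted mixture
```

In this sketch the module would be a drop-in replacement for the backbone's final linear layer, consuming whatever node embeddings the encoder produces.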

📝 Abstract
Heterogeneous Graph Neural Networks (HGNNs) have advanced mainly through better encoders, yet their decoding/projection stage still relies on a single shared linear head, assuming it can map rich node embeddings to labels. We call this the Linear Projection Bottleneck: in heterogeneous graphs, contextual diversity and long-tail shifts make a global head miss fine semantics, overfit hub nodes, and underserve tail nodes. While Mixture-of-Experts (MoE) could help, naively applying it clashes with structural imbalance and risks expert collapse. We propose a Heterogeneous-aware Orthogonal Prototype Experts framework named HOPE, a plug-and-play replacement for the standard prediction head. HOPE uses learnable prototype-based routing to assign instances to experts by similarity, letting expert usage follow the natural long-tail distribution, and adds expert orthogonalization to encourage diversity and prevent collapse. Experiments on four real-world datasets show consistent gains across SOTA HGNN backbones with minimal overhead.
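The expert-orthogonalization term mentioned in the abstract can be sketched as a simple regularizer that penalizes overlap between expert weight matrices; the helper below is a hypothetical illustration, and HOPE's actual constraint may be formulated differently.

```python
import torch
import torch.nn.functional as F


def expert_orthogonality_loss(expert_weights: list) -> torch.Tensor:
    """Penalize pairwise similarity between flattened expert weight matrices."""
    flat = torch.stack([w.flatten() for w in expert_weights])   # (K, D) one row per expert
    flat = F.normalize(flat, dim=-1)
    gram = flat @ flat.t()                                       # (K, K) cosine similarities
    off_diag = gram - torch.eye(gram.size(0), device=gram.device)
    return (off_diag ** 2).sum()                                 # zero when experts are mutually orthogonal
```

Here expert_weights would be something like [e.weight for e in decoder.experts], and the penalty would typically be added to the classification loss with a small coefficient.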
Problem

Research questions and friction points this paper is trying to address.

Heterogeneous Graph Neural Networks
Linear Projection Bottleneck
Long-tail Distribution
Expert Collapse
Structural Imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneous Graph Neural Networks
Mixture-of-Experts
Prototype-based Routing
Expert Orthogonalization
Long-tail Distribution
Wei Zhou
Huazhong University of Science and Technology
IoT Security, System Security, Hardware Security
Hong Huang
Associate Professor, Huazhong University of Science and Technology
data mining, big data analysis
Ruize Shi
National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
Bang Liu
Associate Professor at the University of Montreal, Canada CIFAR AI Chair at Mila
Natural Language Processing, Deep Learning, Machine Learning, Data Mining