🤖 AI Summary
Existing heterogeneous graph neural networks (HGNNs) suffer from parameter explosion and relation collapse—where the number of learnable parameters grows quadratically with the number of relation types, and discriminative capacity across relations degrades—hindering scalability to large-scale heterogeneous graphs with abundant relation schemas. To address this, we propose the Blend&Grind mechanism: a lightweight, scalable HGNN framework that models diverse relations efficiently using a single shared parameter set. It operates in a unified feature space via differentiable relation blending (Blend) and hierarchical relation refinement (Grind), augmented by relation-aware projection and structured regularization. Evaluated on multiple benchmarks, our method reduces model parameters to just 1/28.96 of the prior state-of-the-art, accelerates training throughput by 8.12×, and improves node classification accuracy by up to 7%, thereby substantially overcoming the scalability bottleneck of HGNNs.
📝 Abstract
Many computer vision and machine learning problems are modelled as learning tasks on heterogeneous graphs, featuring a wide array of relations from diverse types of nodes and edges. Heterogeneous graph neural networks (HGNNs) stand out as a promising neural model class designed for heterogeneous graphs. Built on traditional GNNs, existing HGNNs employ different parameter spaces to model the varied relationships. However, the practical effectiveness of existing HGNNs is often limited to simple heterogeneous graphs with few relation types. This paper first highlights and demonstrates that the standard approach employed by existing HGNNs inevitably leads to parameter explosion and relation collapse, making HGNNs less effective or impractical for complex heterogeneous graphs with numerous relation types. To overcome this issue, we introduce a novel framework, Blend&Grind-HGNN (BG-HGNN), which effectively tackles the challenges by carefully integrating different relations into a unified feature space manageable by a single set of parameters. This results in a refined HGNN method that is more efficient and effective in learning from heterogeneous graphs, especially when the number of relations grows. Our empirical studies illustrate that BG-HGNN significantly surpasses existing HGNNs in terms of parameter efficiency (up to 28.96$\times$), training throughput (up to 8.12$\times$), and accuracy (up to 1.07$\times$).
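The core parameter-efficiency claim can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the names (`rel_embed`, `W_shared`, `message`) and the concatenation-based blending are illustrative assumptions. It contrasts the per-relation weight matrices of standard HGNNs with a single shared weight applied after node features are blended with a learned relation embedding into one unified feature space:

```python
import numpy as np

rng = np.random.default_rng(0)
num_relations, d_in, d_out = 5, 8, 4

# Standard HGNN: one projection matrix per relation type,
# so parameter count grows with the number of relations.
per_relation_params = num_relations * d_in * d_out

# Shared-parameter sketch (hypothetical design): each relation gets a
# small learned embedding that is blended with the node feature, so a
# SINGLE shared weight matrix serves every relation type.
d_rel = 3
rel_embed = rng.normal(size=(num_relations, d_rel))  # per-relation code
W_shared = rng.normal(size=(d_in + d_rel, d_out))    # one shared parameter set

def message(x, rel_id):
    """Project node feature x sent along relation rel_id."""
    blended = np.concatenate([x, rel_embed[rel_id]])  # unified feature space
    return blended @ W_shared

shared_params = rel_embed.size + W_shared.size
x = rng.normal(size=d_in)
out = message(x, rel_id=2)
print(out.shape, per_relation_params, shared_params)
```

Here the shared design needs `num_relations * d_rel + (d_in + d_rel) * d_out` parameters instead of `num_relations * d_in * d_out`, so the saving widens as the relation count grows, which is the scalability argument the abstract makes.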