🤖 AI Summary
Existing hypergraph neural networks (HGNNs) suffer from limited representational capacity and generalization because they rely on a single attention mechanism and neglect node overlap heterogeneity. To address this, we propose an overlap-aware meta-learning attention framework. First, we introduce a novel joint attention mechanism that simultaneously models structural and feature-level similarities. Second, we formulate a multi-task learning scheme that partitions nodes into tasks by their overlap degree and design a Meta-Weight-Net to dynamically learn task-specific weights. Third, we enable collaborative end-to-end training of the inner- and outer-loop models. Extensive experiments on six real-world datasets demonstrate that our method consistently outperforms nine state-of-the-art baselines, achieving significant improvements in node classification accuracy and embedding quality. These results validate the effectiveness of jointly modeling node overlap and integrating meta-learning with attention mechanisms.
📝 Abstract
Although hypergraph neural networks (HGNNs) have emerged as a powerful framework for analyzing complex datasets, their practical performance often remains limited. On one hand, existing networks typically employ a single type of attention mechanism, focusing on either structural or feature similarities during message passing. On the other hand, current hypergraph models assume that all nodes have the same level of overlap, which may lead to suboptimal generalization. To overcome these limitations, we propose a novel framework, overlap-aware meta-learning attention for hypergraph neural networks (OMA-HGNN). First, we introduce a hypergraph attention mechanism that integrates both structural and feature similarities; specifically, we linearly combine their respective losses with weighting factors for the HGNN model. Second, we partition nodes into different tasks based on their diverse overlap levels and develop a multi-task Meta-Weight-Net (MWN) to determine the corresponding weighting factors. Third, we jointly train the internal MWN model with the losses from the external HGNN model and train the external model with the weighting factors from the internal model. To evaluate the effectiveness of OMA-HGNN, we conducted experiments on six real-world datasets and benchmarked its performance against nine state-of-the-art methods for node classification. The results demonstrate that OMA-HGNN excels in learning superior node representations and outperforms these baselines.
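To make the loss-weighting idea concrete, the sketch below illustrates one plausible reading of the abstract: nodes are bucketed into tasks by overlap degree (the number of hyperedges each node belongs to), and a tiny Meta-Weight-Net maps each task's loss to a factor that blends the structural and feature losses. All function names, the MLP shape, and the quantile-based task partition are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def overlap_degree(incidence):
    # Proxy for node overlap: count of hyperedges each node belongs to.
    # `incidence` is the (num_nodes x num_hyperedges) 0/1 incidence matrix.
    return incidence.sum(axis=1)

def partition_tasks(degrees, n_tasks=3):
    # Bucket nodes into tasks by overlap-degree quantiles (an assumption;
    # the paper only says nodes are partitioned by overlap level).
    edges = np.quantile(degrees, np.linspace(0, 1, n_tasks + 1)[1:-1])
    return np.digitize(degrees, edges)

def meta_weight_net(loss, w1, b1, w2, b2):
    # Toy MWN: scalar task loss -> ReLU hidden layer -> sigmoid weight in (0, 1).
    h = np.maximum(0.0, loss * w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def combined_loss(struct_losses, feat_losses, task_ids, mwn_params):
    # Per task: alpha * structural loss + (1 - alpha) * feature loss,
    # where alpha comes from the Meta-Weight-Net; average over tasks.
    tasks = np.unique(task_ids)
    total = 0.0
    for t in tasks:
        mask = task_ids == t
        ls = struct_losses[mask].mean()
        lf = feat_losses[mask].mean()
        alpha = meta_weight_net(ls, *mwn_params)
        total += alpha * ls + (1.0 - alpha) * lf
    return total / len(tasks)
```

In the actual framework this weighted loss would drive the outer HGNN update, while the MWN parameters are updated in the inner loop from the HGNN's losses; the sketch only shows the forward weighting step.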