🤖 AI Summary
To address the challenges of modeling long-range dependencies and unstable deep-layer training in hypergraph neural networks (HGNNs), this paper proposes Implicit Hypergraph Neural Networks (IHGNN), the first implicit equilibrium model for hypergraph learning. IHGNN achieves depth-invariant global information propagation by solving a nonlinear fixed-point equation. Methodologically, it integrates transductive message passing, implicit differentiation for training, and a projection-based stabilization strategy to ensure convergence and numerical robustness. Theoretical contributions include provable fixed-point convergence, generalization error bounds, and an analysis of over-smoothing. Empirically, IHGNN significantly outperforms state-of-the-art graph and hypergraph models on multiple citation network benchmarks, demonstrating superior initialization robustness, hyperparameter stability, and resistance to over-smoothing.
📄 Abstract
Many real-world interactions are group-based rather than pairwise, such as papers with multiple co-authors and users jointly engaging with items. Hypergraph neural networks have shown great promise in modeling such higher-order relations, but their reliance on a fixed number of explicit message-passing layers limits long-range dependency capture and can destabilize training as depth grows. In this work, we introduce Implicit Hypergraph Neural Networks (IHGNN), which bring the implicit equilibrium formulation to hypergraphs: instead of stacking layers, IHGNN computes representations as the solution to a nonlinear fixed-point equation, enabling stable and efficient global propagation across hyperedges without deep architectures. We develop a well-posed training scheme with provable convergence, analyze the over-smoothing conditions and expressivity of the model, and derive a transductive generalization bound on hypergraphs. We further present an implicit-gradient training procedure coupled with a projection-based stabilization strategy. Extensive experiments on citation benchmarks show that IHGNN consistently outperforms strong traditional graph and hypergraph neural network baselines in both accuracy and robustness. Empirically, IHGNN is resilient to random initialization and hyperparameter variation, highlighting its strong generalization and practical value for higher-order relational learning.
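To make the core idea concrete, the following is a minimal NumPy sketch of an implicit equilibrium layer on a toy hypergraph. It is not the paper's implementation: the incidence matrix `H`, the normalized propagation operator `A`, the equilibrium map `tanh(A Z W + X)`, and the rescaling of `W` (standing in for the paper's projection-based stabilization) are all illustrative assumptions. The point is that node representations are obtained by solving a fixed-point equation `Z* = phi(Z*)` rather than by stacking message-passing layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hypergraph: 5 nodes, 3 hyperedges (incidence matrix H is illustrative)
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

# Common normalized hypergraph operator: A = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2}
dv = H.sum(axis=1)                      # node degrees
de = H.sum(axis=0)                      # hyperedge degrees
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
De_inv = np.diag(1.0 / de)
A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt

d = 4                                   # hidden dimension (illustrative)
X = rng.normal(size=(5, d))             # input node features
W = rng.normal(size=(d, d))

# Stand-in for projection-based stabilization (assumed form): rescale W so the
# linear part of the map has Lipschitz constant gamma < 1, making phi a
# contraction and guaranteeing a unique fixed point.
gamma = 0.8
W *= gamma / (np.linalg.norm(A, 2) * np.linalg.norm(W, 2))

def phi(Z):
    # One application of the equilibrium map Z -> tanh(A Z W + X)
    return np.tanh(A @ Z @ W + X)

# Solve the fixed-point equation Z* = phi(Z*) by simple iteration
Z = np.zeros_like(X)
for _ in range(300):
    Z_next = phi(Z)
    if np.linalg.norm(Z_next - Z) < 1e-8:
        Z = Z_next
        break
    Z = Z_next

residual = np.linalg.norm(phi(Z) - Z)   # near zero at an (approximate) equilibrium
```

At training time, the paper differentiates through `Z*` via implicit differentiation rather than backpropagating through the iterations; the sketch above only covers the forward equilibrium solve.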