Hypergraph-MLP: Learning on Hypergraphs Without Message Passing

📅 2023-12-15
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing message-passing-based hypergraph neural networks (HGNNs) suffer from over-smoothing, high inference latency, and sensitivity to structural perturbations when modeling high-order relationships. To address these issues, this paper proposes a fully message-passing-free hypergraph learning framework: it eliminates explicit message propagation entirely and instead encodes hypergraph structural information in a signal-smoothness regularization loss, so that node representations are learned by plain MLPs. This design avoids over-smoothing by construction, removes any dependence on the hypergraph structure at inference time, and improves both robustness and computational efficiency. On hypergraph node classification tasks, the method achieves performance competitive with state-of-the-art HGNNs while reducing inference latency by an order of magnitude. It also remains robust under structural perturbations, including hyperedge removal, node deletion, and edge rewiring, without retraining or architectural modification.
📝 Abstract
Hypergraphs are vital in modelling data with higher-order relations containing more than two entities, gaining prominence in machine learning and signal processing. Many hypergraph neural networks leverage message passing over hypergraph structures to enhance node representation learning, yielding impressive performances in tasks like hypergraph node classification. However, these message-passing-based models face several challenges, including oversmoothing as well as high latency and sensitivity to structural perturbations at inference time. To tackle those challenges, we propose an alternative approach where we integrate the information about hypergraph structures into training supervision without explicit message passing, thus also removing the reliance on it at inference. Specifically, we introduce Hypergraph-MLP, a novel learning framework for hypergraph-structured data, where the learning model is a straightforward multilayer perceptron (MLP) supervised by a loss function based on a notion of signal smoothness on hypergraphs. Experiments on hypergraph node classification tasks demonstrate that Hypergraph-MLP achieves competitive performance compared to existing baselines, and is considerably faster and more robust against structural perturbations at inference.
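The core idea above — supervising an MLP with a loss that measures signal smoothness over hyperedges, rather than propagating messages along them — can be illustrated with a small sketch. This uses one common smoothness notion (the squared deviation of each member node's embedding from its hyperedge mean) purely as a stand-in; the function name, the NumPy setup, and the exact smoothness formulation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def hypergraph_smoothness_loss(X, hyperedges):
    """Signal-smoothness penalty over a hypergraph.

    X          : (num_nodes, dim) array of node embeddings (e.g. MLP outputs).
    hyperedges : list of node-index lists, one list per hyperedge.

    Assumed smoothness notion: sum of squared deviations of each member
    node's embedding from its hyperedge centroid, averaged over hyperedges.
    A perfectly smooth signal (identical embeddings within every hyperedge)
    gives a loss of zero.
    """
    loss = 0.0
    for edge in hyperedges:
        members = X[edge]                # (|e|, dim) embeddings in this hyperedge
        mean = members.mean(axis=0)      # hyperedge centroid
        loss += ((members - mean) ** 2).sum()
    return loss / max(len(hyperedges), 1)

# Toy example: 4 nodes with 2-dim embeddings, two hyperedges.
X = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0]])
smooth_edges = [[0, 1], [2, 3]]   # nodes agree within each hyperedge
mixed_edges = [[0, 2]]            # hyperedge spans dissimilar nodes

print(hypergraph_smoothness_loss(X, smooth_edges))  # → 0.0
print(hypergraph_smoothness_loss(X, mixed_edges))   # → 1.0
```

During training, a term like this would be added to the usual classification loss (e.g. `loss = cross_entropy + lam * smoothness`), which is how the hypergraph structure enters supervision; at inference the MLP runs on node features alone, with no hyperedge lookups at all.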
Problem

Research questions and friction points this paper is trying to address.

Addresses oversmoothing in hypergraph neural networks
Reduces latency and sensitivity to structural perturbations
Proposes Hypergraph-MLP for efficient hypergraph node classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hypergraph-MLP replaces message passing with a plain MLP
Supervises the MLP with a loss based on hypergraph signal smoothness
Achieves fast, robust hypergraph node classification at inference