Directional Sheaf Hypergraph Networks: Unifying Learning on Directed and Undirected Hypergraphs

📅 2025-10-06
🤖 AI Summary
Existing hypergraph neural networks struggle to model directed high-order interactions and exhibit limited performance under heterophily. Sheaf Neural Networks (SNNs) likewise only support undirected hypergraphs. To address these limitations, we first extend sheaf theory to directed hypergraphs, introducing the *directed sheaf hypergraph Laplacian*—a unified generalization of graph and hypergraph Laplacians. Building upon this, we propose a complex-valued Laplacian-driven neural network framework capable of jointly modeling both directed and undirected hypergraphs. Our approach explicitly encodes directional relationships among nodes, substantially enhancing representational capacity in heterophilous settings. Extensive experiments across seven real-world datasets demonstrate consistent improvements over 13 state-of-the-art baselines, with accuracy gains ranging from 2% to 20%, validating both effectiveness and generalizability.

📝 Abstract
Hypergraphs provide a natural way to represent higher-order interactions among multiple entities. While undirected hypergraphs have been extensively studied, the case of directed hypergraphs, which can model oriented group interactions, remains largely under-explored despite its relevance for many applications. Recent approaches in this direction often exhibit an implicit bias toward homophily, which limits their effectiveness in heterophilic settings. Rooted in the algebraic topology notion of Cellular Sheaves, Sheaf Neural Networks (SNNs) were introduced as an effective solution to circumvent such a drawback. While a generalization to hypergraphs is known, it is only suitable for undirected hypergraphs, failing to tackle the directed case. In this work, we introduce Directional Sheaf Hypergraph Networks (DSHN), a framework integrating sheaf theory with a principled treatment of asymmetric relations within a hypergraph. From it, we construct the Directed Sheaf Hypergraph Laplacian, a complex-valued operator by which we unify and generalize many existing Laplacian matrices proposed in the graph- and hypergraph-learning literature. Across 7 real-world datasets and against 13 baselines, DSHN achieves relative accuracy gains from 2% up to 20%, showing how a principled treatment of directionality in hypergraphs, combined with the expressive power of sheaves, can substantially improve performance.
Problem

Research questions and friction points this paper is trying to address.

Modeling directed hypergraphs for oriented group interactions
Overcoming homophily bias in heterophilic hypergraph settings
Unifying Laplacian matrices for directed and undirected hypergraphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directional Sheaf Hypergraph Networks (DSHN) unify learning on directed and undirected hypergraphs
A complex-valued Directed Sheaf Hypergraph Laplacian generalizes existing graph and hypergraph Laplacians
The framework integrates sheaf theory with a principled treatment of asymmetric relations in hypergraphs
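This page does not give the construction of the paper's Directed Sheaf Hypergraph Laplacian, but the simplest complex-valued operator encoding edge direction through a phase, and the kind of object such a Laplacian generalizes, is the classical magnetic Laplacian for directed graphs. A minimal sketch (NumPy, not the paper's method; the charge parameter `q` and the construction below are standard for magnetic Laplacians, assumed here for illustration):

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Complex-valued (magnetic) Laplacian of a directed graph.

    A : (n, n) real non-negative adjacency matrix, A[i, j] = weight of edge i->j.
    q : "charge" parameter controlling how strongly direction enters the phase.

    Note: this is the classical magnetic Laplacian, sketched here only to
    illustrate how a complex phase can encode directionality; it is NOT the
    paper's Directed Sheaf Hypergraph Laplacian.
    """
    A_sym = 0.5 * (A + A.T)                  # symmetrized edge weights
    theta = 2 * np.pi * q * (A - A.T)        # antisymmetric, direction-dependent phase
    D = np.diag(A_sym.sum(axis=1))           # degree matrix of the symmetrized graph
    return D - A_sym * np.exp(1j * theta)    # Hermitian, so its spectrum is real

# Usage: a directed 3-cycle 0 -> 1 -> 2 -> 0.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
L = magnetic_laplacian(A)
```

Because the phase matrix is antisymmetric and the weights symmetric, `L` is Hermitian with a real, non-negative spectrum, which is what allows spectral convolutions to be defined on directed structures.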