🤖 AI Summary
Existing Sheaf Neural Networks (SNNs) struggle to model edge directionality in graph-structured data, limiting their performance on direction-sensitive tasks. To address this, we propose Directed Cellular Sheaves and the associated Directed Sheaf Laplacian, which explicitly encode edge orientation in the sheaf structure, and introduce the Directed Sheaf Neural Network (DSNN), the first sheaf-based architecture to incorporate a directional prior. Our method combines directed cellular sheaf theory, spectral design of sheaf Laplacians, and graph signal processing to enable hierarchical, direction-aware feature propagation. Across nine real-world graph benchmarks, DSNN consistently outperforms state-of-the-art GNNs and SNNs, with especially large gains on direction-sensitive tasks such as link prediction and oriented graph classification. This work establishes a geometric deep learning paradigm explicitly tailored to relational directionality in graphs.
📝 Abstract
Sheaf Neural Networks (SNNs) are a powerful generalization of Graph Neural Networks (GNNs) that significantly improves our ability to model complex relational data. While directionality has been shown to substantially boost performance in graph learning tasks and is key to many real-world applications, existing SNNs fall short of representing it. To address this limitation, we introduce the Directed Cellular Sheaf, a special type of cellular sheaf designed to explicitly account for edge orientation. Building on this structure, we define a new sheaf Laplacian, the Directed Sheaf Laplacian, which captures both the graph's topology and its directional information. This operator serves as the backbone of the Directed Sheaf Neural Network (DSNN), the first SNN model to embed a directional bias into its architecture. Extensive experiments on nine real-world benchmarks show that DSNN consistently outperforms baseline methods.
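For readers unfamiliar with sheaf Laplacians, the sketch below builds the classical (undirected) cellular sheaf Laplacian of Hansen and Ghrist, which the Directed Sheaf Laplacian generalizes. The graph, stalk dimension, and random restriction maps are toy choices for illustration only; this is not the paper's directed construction, whose specifics are defined in the text.

```python
import numpy as np

# Toy setup: a path graph on 3 vertices, 2-dimensional stalks everywhere.
# These choices are illustrative, not taken from the paper.
d_stalk = 2
edges = [(0, 1), (1, 2)]
n_vertices = 3
rng = np.random.default_rng(0)

# One restriction map F_{v <= e}: stalk(v) -> stalk(e) per vertex-edge
# incidence, here random for demonstration (an SNN would learn them).
F = {(v, e): rng.standard_normal((d_stalk, d_stalk))
     for e in edges for v in e}

# Coboundary operator: (d x)_e = F_{u<=e} x_u - F_{w<=e} x_w for e = (u, w).
d = np.zeros((len(edges) * d_stalk, n_vertices * d_stalk))
for i, (u, w) in enumerate(edges):
    d[i*d_stalk:(i+1)*d_stalk, u*d_stalk:(u+1)*d_stalk] = F[(u, (u, w))]
    d[i*d_stalk:(i+1)*d_stalk, w*d_stalk:(w+1)*d_stalk] = -F[(w, (u, w))]

# Sheaf Laplacian L = d^T d: symmetric positive semidefinite, blind to
# edge orientation -- the limitation the Directed Sheaf Laplacian addresses.
L = d.T @ d
assert np.allclose(L, L.T)
assert np.linalg.eigvalsh(L).min() >= -1e-9
```

Note that `L` is invariant to flipping any edge's orientation, which is exactly why a purpose-built directed operator is needed for direction-sensitive tasks.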