Directed Semi-Simplicial Learning with Applications to Brain Activity Decoding

📅 2025-05-23
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing topological deep learning models are restricted to undirected structures, limiting their capacity to model the directed higher-order interactions prevalent in complex systems such as brain networks. To address this, we propose the first Semi-Simplicial Neural Network (SSN) explicitly designed for directed higher-order structures, built upon directed semi-simplicial sets to significantly enhance topological expressivity. We further introduce Routing-SSNs, a learnable routing mechanism, to improve scalability. Moreover, we establish the first theoretically grounded representation learning framework provably capable of recovering essential topological features of brain networks. Empirically, our method achieves state-of-the-art performance on brain dynamics classification, outperforming the second-best model by up to 27% and conventional GNNs by up to 50%. It also shows competitive performance on node classification and edge regression tasks. The code and datasets will be publicly released.

๐Ÿ“ Abstract
Graph Neural Networks (GNNs) excel at learning from pairwise interactions but often overlook multi-way and hierarchical relationships. Topological Deep Learning (TDL) addresses this limitation by leveraging combinatorial topological spaces. However, existing TDL models are restricted to undirected settings and fail to capture the higher-order directed patterns prevalent in many complex systems, e.g., brain networks, where such interactions are both abundant and functionally significant. To fill this gap, we introduce Semi-Simplicial Neural Networks (SSNs), a principled class of TDL models that operate on semi-simplicial sets -- combinatorial structures that encode directed higher-order motifs and their directional relationships. To enhance scalability, we propose Routing-SSNs, which dynamically select the most informative relations in a learnable manner. We prove that SSNs are strictly more expressive than standard graph and TDL models. We then introduce a new principled framework for brain dynamics representation learning, grounded in the ability of SSNs to provably recover topological descriptors shown to successfully characterize brain activity. Empirically, SSNs achieve state-of-the-art performance on brain dynamics classification tasks, outperforming the second-best model by up to 27%, and message passing GNNs by up to 50% in accuracy. Our results highlight the potential of principled topological models for learning from structured brain data, establishing a unique real-world case study for TDL. We also test SSNs on standard node classification and edge regression tasks, showing competitive performance. We will make the code and data publicly available.
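The abstract's key structural object is the semi-simplicial set: ordered higher-order cells related by face maps, with no symmetry that would identify different vertex orderings. A minimal sketch of that idea (an illustration of the combinatorics only, not the paper's implementation) represents a directed k-simplex as an ordered vertex tuple and the i-th face map as dropping the i-th vertex:

```python
# Hypothetical illustration of semi-simplicial face maps.
# A directed 2-simplex is an ordered vertex tuple; the face map d_i
# drops the i-th vertex while preserving the order of the rest, so
# direction information survives in every face.

def face(simplex, i):
    """i-th face map d_i: remove vertex i, keeping vertex order."""
    return simplex[:i] + simplex[i + 1:]

def all_faces(simplex):
    """All codimension-1 faces of an ordered simplex."""
    return [face(simplex, i) for i in range(len(simplex))]

# The directed triangle (a, b, c) and its reversal (c, b, a) are
# distinct cells here, unlike in an undirected simplicial complex.
tri = ("a", "b", "c")
print(all_faces(tri))               # [('b', 'c'), ('a', 'c'), ('a', 'b')]
print(all_faces(("c", "b", "a")))   # [('b', 'a'), ('c', 'a'), ('c', 'b')]
```

The two triangles share an underlying vertex set but have different ordered faces, which is the directional information undirected TDL models discard.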
Problem

Research questions and friction points this paper is trying to address.

Capturing directed higher-order patterns in complex systems
Enhancing scalability of topological deep learning models
Improving brain activity decoding with expressive neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

SSNs model directed higher-order brain interactions
Routing-SSNs dynamically select informative relations
SSNs outperform GNNs by up to 50%
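The routing idea in the second bullet can be sketched generically: learn one score per candidate relation and mix the per-relation messages by their softmax weights, so training can concentrate mass on the most informative relations. This is a hedged sketch in the spirit of learnable routing, not the paper's Routing-SSN; the array shapes and names below are invented for illustration.

```python
# Generic learnable-routing sketch (illustrative, not Routing-SSN itself):
# softmax-weighted mixing of messages from several candidate relations.
import numpy as np

rng = np.random.default_rng(0)
num_relations, dim = 4, 8
messages = rng.normal(size=(num_relations, dim))  # one message per relation
logits = rng.normal(size=num_relations)           # learnable routing scores

weights = np.exp(logits - logits.max())
weights /= weights.sum()                          # softmax over relations
routed = weights @ messages                       # (dim,) routed message
```

In a trained model the logits would be produced by a small network from the input, letting the routing adapt per sample rather than being fixed.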