HOPSE: Scalable Higher-Order Positional and Structural Encoder for Combinatorial Representations

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing graph neural networks (GNNs) struggle to model higher-order, multi-directional relationships prevalent in real-world systems. While topological deep learning (TDL) leverages structures such as simplicial complexes to support higher-order interactions, mainstream higher-order message passing (HOMP) methods suffer from combinatorial path explosion and computational redundancy, severely limiting scalability. This paper introduces the first **message-passing-free higher-order encoding framework**, achieving linear time complexity via Hasse graph decomposition while preserving both expressive power and permutation equivariance. Key innovations include: (i) hierarchical decoupling of the Hasse graph, (ii) encoding of higher-order compositional structures, and (iii) joint positional-structural embedding. Empirically, our method matches or surpasses state-of-the-art models on molecular property prediction and topological benchmarks, with up to 7× faster inference speed.
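The "joint positional-structural embedding" mentioned above is not spelled out in this summary; a common positional encoding that such a framework could apply to each (sub)graph is the Laplacian eigenvector encoding, which is computed once per graph rather than via iterative message passing. The sketch below is illustrative only and is an assumption, not the paper's exact encoder: it returns the `k` smallest nontrivial eigenvectors of the graph Laplacian for a dense adjacency matrix.

```python
import numpy as np

def laplacian_pe(adj, k=2):
    """Laplacian-eigenvector positional encoding (illustrative sketch).

    adj: dense symmetric adjacency matrix of a graph (e.g., one Hasse
    subgraph). Returns the k eigenvectors of L = D - A with the smallest
    nonzero eigenvalues, skipping the constant (eigenvalue-0) eigenvector.
    """
    adj = np.asarray(adj, dtype=float)
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    # eigh returns eigenvalues in ascending order for symmetric matrices
    _, vecs = np.linalg.eigh(lap)
    return vecs[:, 1:k + 1]
```

Because the eigendecomposition is a one-shot preprocessing step, encodings like this avoid the per-layer propagation cost that HOMP pays, which is the intuition behind a message-passing-free design.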

📝 Abstract
While Graph Neural Networks (GNNs) have proven highly effective at modeling relational data, pairwise connections cannot fully capture multi-way relationships naturally present in complex real-world systems. In response to this, Topological Deep Learning (TDL) leverages more general combinatorial representations -- such as simplicial or cellular complexes -- to accommodate higher-order interactions. Existing TDL methods often extend GNNs through Higher-Order Message Passing (HOMP), but face critical *scalability challenges* due to *(i)* a combinatorial explosion of message-passing routes, and *(ii)* significant complexity overhead from the propagation mechanism. To overcome these limitations, we propose HOPSE (Higher-Order Positional and Structural Encoder) -- a *message-passing-free* framework that uses Hasse graph decompositions to derive efficient and expressive encodings over *arbitrary higher-order domains*. Notably, HOPSE scales linearly with dataset size while preserving expressive power and permutation equivariance. Experiments on molecular, expressivity, and topological benchmarks show that HOPSE matches or surpasses state-of-the-art performance while achieving up to 7× speedups over HOMP-based models, opening a new path for scalable TDL.
Problem

Research questions and friction points this paper is trying to address.

Addressing scalability in higher-order graph representations
Eliminating combinatorial explosion in message-passing routes
Reducing complexity in topological deep learning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Hasse graph decompositions for encoding
Eliminates message-passing for scalability
Scales linearly with dataset size
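To make the Hasse-graph idea concrete: the Hasse graph of a simplicial complex connects each cell to its faces one rank below, and it can be split into per-rank incidence subgraphs on which encodings are computed directly, with no iterative propagation. The sketch below is a minimal illustration under that reading of the summary (the function names and the degree-based encoding are assumptions, not the paper's API):

```python
import itertools

def hasse_edges(simplices):
    """Boundary incidences: each k-simplex links to its (k-1)-faces.

    Simplices are sorted tuples of vertex ids, e.g. (0, 1, 2).
    """
    edges = []
    for s in simplices:
        if len(s) > 1:
            for face in itertools.combinations(s, len(s) - 1):
                edges.append((face, s))
    return edges

def decompose_by_rank(simplices):
    """Split the Hasse graph into per-rank subgraphs: rank k holds the
    (k-1)-face -> k-cell incidences. Each subgraph can then be encoded
    independently, which is the decoupling step."""
    by_rank = {}
    for lo, hi in hasse_edges(simplices):
        by_rank.setdefault(len(hi) - 1, []).append((lo, hi))
    return by_rank

def structural_encoding(simplices):
    """Toy structural feature per cell: its degree in the Hasse graph
    (number of faces plus cofaces), computed in one linear pass."""
    deg = {s: 0 for s in simplices}
    for lo, hi in hasse_edges(simplices):
        deg[lo] += 1  # lo gains a coface
        deg[hi] += 1  # hi gains a face
    return deg
```

For a filled triangle (three vertices, three edges, one 2-cell), each edge has degree 3 (two faces, one coface), matching the single pass over boundary incidences; the work grows linearly with the number of incidences rather than with the number of multi-hop message routes.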