MoSE: Unveiling Structural Patterns in Graphs via Mixture of Subgraph Experts

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Neural Networks (GNNs) suffer from limited expressive power due to their reliance on local, pairwise message passing, hindering effective modeling of higher-order subgraph structures. Existing random-walk-based kernel methods are designed for graph-level tasks, exhibit poor generalization, and employ fixed kernel configurations, lacking flexibility in structural modeling. Method: We propose Mixture of Subgraph Experts (MoSE), a novel framework that extracts informative subgraphs via anonymous walks and employs a gated routing mechanism to dynamically assign them to semantically specialized subgraph experts, enabling flexible and interpretable higher-order structural modeling. Contribution/Results: MoSE is the first to integrate subgraph expert ensembling with dynamic routing into multi-task graph learning. It theoretically surpasses the Subgraph Weisfeiler-Lehman (SWL) test in expressive power. Empirically, MoSE achieves significant improvements over state-of-the-art methods on node and graph classification tasks, while providing strong structural interpretability.
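The anonymous walks used for subgraph extraction can be illustrated with a short sketch. An anonymous walk replaces each node in a random walk with the index of its first occurrence, so the encoding captures only the walk's structural pattern, not node identities (the function name is illustrative, not from the paper):

```python
def anonymize_walk(walk):
    """Map each node in a walk to the index of its first occurrence,
    yielding a structure-only encoding independent of node identity."""
    first_seen = {}
    return tuple(first_seen.setdefault(v, len(first_seen) + 1) for v in walk)

# Two walks over different node sets share the same anonymous pattern:
anonymize_walk(["a", "b", "a", "c"])  # (1, 2, 1, 3)
anonymize_walk([7, 4, 7, 9])          # (1, 2, 1, 3)
```

Because identically shaped walks collapse to the same pattern, walks can be grouped by structure before being handed to subgraph experts.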

📝 Abstract
While graph neural networks (GNNs) have achieved great success in learning from graph-structured data, their reliance on local, pairwise message passing restricts their ability to capture complex, high-order subgraph patterns, leading to insufficient structural expressiveness. Recent efforts have attempted to enhance structural expressiveness by integrating random walk kernels into GNNs. However, these methods are inherently designed for graph-level tasks, which limits their applicability to other downstream tasks such as node classification. Moreover, their fixed kernel configurations hinder the model's flexibility in capturing diverse subgraph structures. To address these limitations, this paper proposes a novel Mixture of Subgraph Experts (MoSE) framework for flexible and expressive subgraph-based representation learning across diverse graph tasks. Specifically, MoSE extracts informative subgraphs via anonymous walks and dynamically routes them to specialized experts based on structural semantics, enabling the model to capture diverse subgraph patterns with improved flexibility and interpretability. We further provide a theoretical analysis of MoSE's expressivity within the Subgraph Weisfeiler-Lehman (SWL) Test, proving that it is more powerful than SWL. Extensive experiments, together with visualizations of learned subgraph experts, demonstrate that MoSE not only outperforms competitive baselines but also provides interpretable insights into structural patterns learned by the model.
Problem

Research questions and friction points this paper is trying to address.

Enhancing structural expressiveness beyond local message passing
Overcoming fixed kernel limitations for diverse subgraph patterns
Enabling flexible subgraph representation across diverse graph tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Subgraph Experts framework
Anonymous walks extract informative subgraphs
Dynamic routing based on structural semantics
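The gated routing idea can be sketched as a standard sparse mixture-of-experts step: score every expert for a subgraph embedding, keep the top-k, and combine their outputs with softmax weights. This is a minimal NumPy sketch assuming a linear gate and generic expert functions; `gate_W`, `experts`, and `top_k` are hypothetical placeholders, not the paper's exact architecture:

```python
import numpy as np

def route_to_experts(subgraph_emb, gate_W, experts, top_k=2):
    """Hypothetical gated routing: score experts for one subgraph embedding,
    select the top-k, and return their softmax-weighted combination."""
    scores = gate_W @ subgraph_emb                 # one gating score per expert
    top = np.argsort(scores)[-top_k:]              # indices of the top-k experts
    w = np.exp(scores[top] - scores[top].max())    # stable softmax over the
    w /= w.sum()                                   # selected experts only
    return sum(wi * experts[i](subgraph_emb) for wi, i in zip(w, top))
```

Routing each anonymized subgraph through only its top-scoring experts is what lets the experts specialize on distinct structural patterns.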