Learning Accurate, Efficient, and Interpretable MLPs on Multiplex Graphs via Node-wise Multi-View Ensemble Distillation

📅 2025-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the slow inference of multiplex graph neural networks (MGNNs) caused by neighbor aggregation, their limited interpretability, and the weak accuracy of vanilla MLP baselines in latency-sensitive applications, this paper proposes the MGFNN family: lightweight, graph-structure-free MLP models. The core method is a node-wise multi-view ensemble distillation framework featuring a node-adaptive multi-view integration mechanism. This mechanism employs low-rank reparameterization to learn node-specific view weights, enabling efficient knowledge transfer from MGNN teachers to compact MLP students. The approach substantially improves accuracy (about 10% over vanilla MLP baselines), inference efficiency (35.40×–89.14× faster than teacher MGNNs), and interpretability, supporting node-level visualization of per-view contribution scores.
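
As a concrete illustration, below is a minimal PyTorch sketch of such a low-rank node-wise weighting scheme. All names (`NodeWiseViewWeights`, `ensemble_soft_labels`), the rank value, and the softmax normalization over views are assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeWiseViewWeights(nn.Module):
    """Sketch of low-rank reparameterized node-wise view coefficients.

    Instead of learning a full N x V coefficient matrix (one weight per
    node per graph view), the matrix is factored as A @ B with small rank
    r, so only (N + V) * r parameters are trained. Hypothetical module,
    not the paper's API.
    """

    def __init__(self, num_nodes: int, num_views: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(num_nodes, rank) * 0.01)  # N x r
        self.B = nn.Parameter(torch.randn(rank, num_views) * 0.01)  # r x V

    def forward(self) -> torch.Tensor:
        # Softmax over views gives per-node mixing weights that sum to 1.
        return F.softmax(self.A @ self.B, dim=1)  # N x V


def ensemble_soft_labels(teacher_logits: list[torch.Tensor],
                         weights: torch.Tensor,
                         temperature: float = 1.0) -> torch.Tensor:
    """Blend each view-specific teacher's softened predictions per node."""
    probs = torch.stack(
        [F.softmax(l / temperature, dim=1) for l in teacher_logits], dim=1
    )  # N x V x C
    return (weights.unsqueeze(-1) * probs).sum(dim=1)  # N x C soft targets
```

The low-rank factorization is what keeps the mechanism scalable: a per-node weight table would grow linearly with the number of nodes times views, while A and B stay compact.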

📝 Abstract
Multiplex graphs, with multiple edge types (graph views) among common nodes, provide richer structural semantics and better modeling capabilities. Multiplex Graph Neural Networks (MGNNs), typically comprising view-specific GNNs and a multi-view integration layer, have achieved advanced performance in various downstream tasks. However, their reliance on neighborhood aggregation poses challenges for deployment in latency-sensitive applications. Motivated by recent GNN-to-MLP knowledge distillation frameworks, we propose Multiplex Graph-Free Neural Networks (MGFNN and MGFNN+) to combine MGNNs' superior performance and MLPs' efficient inference via knowledge distillation. MGFNN directly trains student MLPs with node features as input and soft labels from teacher MGNNs as targets. MGFNN+ further employs a low-rank approximation-based reparameterization to learn node-wise coefficients, enabling adaptive knowledge ensemble from each view-specific GNN. This node-wise multi-view ensemble distillation strategy allows student MLPs to learn more informative multiplex semantic knowledge for different nodes. Experiments show that MGFNNs achieve average accuracy improvements of about 10% over vanilla MLPs and perform comparably to or even better than teacher MGNNs (accurate); MGFNNs achieve a 35.40×–89.14× speedup in inference over MGNNs (efficient); MGFNN+ adaptively assigns different coefficients for multi-view ensemble distillation regarding different nodes (interpretable).
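
The base MGFNN training signal described in the abstract (node features in, teacher soft labels as targets) can be sketched as a standard distillation objective. The `alpha` balance, the KL-divergence form, and all names below are assumptions; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def mgfnn_loss(student_logits: torch.Tensor,
               teacher_soft_labels: torch.Tensor,
               hard_labels: torch.Tensor,
               labeled_mask: torch.Tensor,
               alpha: float = 0.5) -> torch.Tensor:
    """Cross-entropy on labeled nodes plus KL to teacher soft labels on all nodes.

    teacher_soft_labels: N x C probabilities, e.g. an ensemble of
    view-specific teacher predictions. alpha trades off the two terms.
    """
    ce = F.cross_entropy(student_logits[labeled_mask],
                         hard_labels[labeled_mask])
    kd = F.kl_div(F.log_softmax(student_logits, dim=1),
                  teacher_soft_labels, reduction="batchmean")
    return alpha * ce + (1.0 - alpha) * kd
```

In MGFNN+, `teacher_soft_labels` would be the node-wise weighted ensemble of view-specific teacher outputs rather than a single teacher's predictions.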
Problem

Research questions and friction points this paper is trying to address.

Achieving MLP-level inference speed on multiplex graphs, where MGNNs are bottlenecked by neighbor aggregation
Closing the accuracy gap between vanilla MLPs and teacher MGNNs via multi-view ensemble distillation
Making multi-view integration interpretable through node-wise coefficient learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Node-wise multi-view ensemble distillation
Low-rank approximation-based reparameterization
Knowledge distillation from MGNNs to MLPs (see the inference sketch after this list)
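
To make the efficiency claim concrete: once distilled, the student is a plain MLP, so deployment needs only the node feature matrix, with no adjacency lookups or neighbor aggregation. A minimal sketch follows; the feature dimension, layer sizes, and class count are made up for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical deployed student: a plain MLP over node features alone.
student = nn.Sequential(
    nn.Linear(128, 64),  # 128-dim node features (illustrative)
    nn.ReLU(),
    nn.Linear(64, 7),    # 7 output classes (illustrative)
)

x = torch.randn(10_000, 128)              # node features only, no graph
with torch.no_grad():
    preds = student(x).argmax(dim=1)      # no neighbor aggregation needed
```

The absence of graph-structure access at inference time is what underlies the reported 35.40×–89.14× speedups over teacher MGNNs.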