ProGMLP: A Progressive Framework for GNN-to-MLP Knowledge Distillation with Efficient Trade-offs

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GNN-to-MLP (G2M) distillation methods lack dynamic trade-off capability between inference cost and accuracy, limiting adaptability to resource-constrained and heterogeneous deployment environments. This paper proposes the first on-demand adjustable G2M knowledge distillation framework that enables flexible precision-efficiency balancing. Its core contributions are: (1) a progressive knowledge distillation mechanism that transfers GNN knowledge in stages; (2) a multi-stage scalable MLP student architecture supporting configurable inference overhead; and (3) a progressive hybrid augmentation strategy integrating structural and feature-level enhancements to improve generalization. Extensive experiments across eight real-world graph datasets demonstrate that our method achieves substantial efficiency gains—average inference latency reduced by 62% and parameter count by 89%—while maintaining controlled accuracy degradation (average drop of only 1.3%). The framework thus effectively supports heterogeneous edge deployment with adaptive precision-efficiency tuning.

📝 Abstract
GNN-to-MLP (G2M) methods have emerged as a promising approach to accelerate Graph Neural Networks (GNNs) by distilling their knowledge into simpler Multi-Layer Perceptrons (MLPs). These methods bridge the gap between the expressive power of GNNs and the computational efficiency of MLPs, making them well-suited for resource-constrained environments. However, existing G2M methods are limited by their inability to flexibly adjust inference cost and accuracy dynamically, a critical requirement for real-world applications where computational resources and time constraints can vary significantly. To address this, we introduce a Progressive framework designed to offer flexible and on-demand trade-offs between inference cost and accuracy for GNN-to-MLP knowledge distillation (ProGMLP). ProGMLP employs a Progressive Training Structure (PTS), where multiple MLP students are trained in sequence, each building on the previous one. Furthermore, ProGMLP incorporates Progressive Knowledge Distillation (PKD) to iteratively refine the distillation process from GNNs to MLPs, and Progressive Mixup Augmentation (PMA) to enhance generalization by progressively generating harder mixed samples. Our approach is validated through comprehensive experiments on eight real-world graph datasets, demonstrating that ProGMLP maintains high accuracy while dynamically adapting to varying runtime scenarios, making it highly effective for deployment in diverse application settings.
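The Progressive Training Structure and Progressive Knowledge Distillation described in the abstract can be illustrated with a minimal sketch, assuming a softened-teacher KL objective in which each later student also matches the previous student's softened outputs. The temperature `T`, the weight `beta`, and the choice of KL divergence are illustrative assumptions, not the paper's exact objective.

```python
import math

def softmax(z, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    e = [math.exp(v / T) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl_div(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def progressive_kd_loss(student_logits, teacher_logits,
                        prev_student_logits=None, T=2.0, beta=0.5):
    """Distillation target for one stage of a sequential student chain.

    The first student matches the softened GNN teacher; later students
    additionally match the previous student's softened outputs, weighted
    by beta (an assumed weighting scheme for illustration).
    """
    q = softmax(student_logits, T)
    loss = kl_div(softmax(teacher_logits, T), q)
    if prev_student_logits is not None:
        loss = (1 - beta) * loss + beta * kl_div(softmax(prev_student_logits, T), q)
    return loss

# Toy usage: a stage-2 student distilling from both teacher and stage-1 student.
print(round(progressive_kd_loss([1.0, 0.2, -0.5],
                                [2.0, 0.1, -1.0],
                                prev_student_logits=[1.5, 0.0, -0.8]), 4))
```

A perfectly matched student incurs zero loss, which makes the objective easy to sanity-check in isolation.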
Problem

Research questions and friction points this paper is trying to address.

Provides flexible, on-demand trade-offs between GNN-to-MLP inference cost and accuracy
Improves distillation via progressive training and iterative knowledge refinement
Enhances generalization with a progressive mixup augmentation strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive Training Structure for sequential MLP training
Progressive Knowledge Distillation for iterative refinement
Progressive Mixup Augmentation for enhanced generalization
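One plausible reading of Progressive Mixup Augmentation, sketched below under stated assumptions: mixup draws a mixing coefficient from Beta(α, α), and larger α concentrates that coefficient near 0.5, producing harder (more ambiguous) mixed samples. Annealing α upward across stages is an assumed schedule for illustration; the paper's exact PMA schedule may differ, and `alpha_min`/`alpha_max` are hypothetical parameters.

```python
import random

def mixup(x1, x2, lam):
    """Convexly combine two feature vectors with coefficient lam."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

def progressive_alpha(stage, num_stages, alpha_min=0.2, alpha_max=2.0):
    """Linearly anneal the Beta(alpha, alpha) parameter across stages,
    so later stages draw mixing coefficients closer to 0.5 and the
    mixed samples become progressively harder (assumed schedule)."""
    t = stage / max(num_stages - 1, 1)
    return alpha_min + t * (alpha_max - alpha_min)

random.seed(0)
x1, x2 = [1.0, 0.0], [0.0, 1.0]
for stage in range(3):
    alpha = progressive_alpha(stage, 3)
    lam = random.betavariate(alpha, alpha)
    print(stage, round(alpha, 2), [round(v, 3) for v in mixup(x1, x2, lam)])
```

The same annealed coefficient could equally be applied to label vectors, which is how standard mixup forms soft training targets.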
Weigang Lu
School of Computer Science and Technology, Xidian University, Xi’an, China
Ziyu Guan
Xidian University
Data mining · machine learning · social media
Wei Zhao
School of Computer Science and Technology, Xidian University, Xi’an, China
Yaming Yang
School of Computer Science and Technology, Xidian University, Xi’an, China
Yujie Sun
Professor, Department of Chemistry, University of Cincinnati
Inorganic Chemistry · Electrochemistry · Photochemistry
Zheng Liang
Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR
Yibing Zhan
Unknown affiliation
Dapeng Tao
Yunnan University