Nonparametric Teaching for Graph Property Learners

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph neural networks (GNNs) such as GCNs are costly to train, and it is difficult to improve training efficiency without sacrificing generalization. Method: This paper proposes Graph Neural Teaching (GraNT), a novel paradigm that establishes the first theoretical consistency between GNN training and structure-aware nonparametric teaching. GraNT introduces a graph-structure-sensitive example selection mechanism, enabling efficient, parameter-free implicit teaching. It integrates functional gradient descent, graph structural modeling, nonparametric teaching theory, and analysis of GCN training dynamics. Contribution/Results: GraNT pioneers the integration of nonparametric teaching into graph learning, uncovers a structural-sensitivity-driven teaching principle, and delivers a plug-and-play, high-efficiency training framework. Extensive experiments on multiple graph- and node-level regression and classification tasks demonstrate a 30.97%–47.30% reduction in training time without compromising generalization performance.

📝 Abstract
Inferring properties of graph-structured data, e.g., the solubility of molecules, essentially involves learning the implicit mapping from graphs to their properties. This learning process is often costly for graph property learners like Graph Convolutional Networks (GCNs). To address this, we propose a paradigm called Graph Neural Teaching (GraNT) that reinterprets the learning process through a novel nonparametric teaching perspective. Specifically, the latter offers a theoretical framework for teaching implicitly defined (i.e., nonparametric) mappings via example selection. Such an implicit mapping is realized by a dense set of graph-property pairs, with the GraNT teacher selecting a subset of them to promote faster convergence in GCN training. By analytically examining the impact of graph structure on parameter-based gradient descent during training, and recasting the evolution of GCNs, shaped by parameter updates, through functional gradient descent in nonparametric teaching, we show for the first time that teaching graph property learners (i.e., GCNs) is consistent with teaching structure-aware nonparametric learners. These new findings readily commit GraNT to enhancing learning efficiency of the graph property learner, showing significant reductions in training time for graph-level regression (-36.62%), graph-level classification (-38.19%), node-level regression (-30.97%) and node-level classification (-47.30%), all while maintaining its generalization performance.
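The abstract's core mechanism, a teacher selecting a subset of graph-property pairs to speed up the learner's convergence, can be sketched minimally as follows. This is a hypothetical illustration only: it scores examples by current squared prediction error as a rough proxy for the functional-gradient magnitude that GraNT's theory actually prescribes, and the function name `select_teaching_subset` is an assumption, not the paper's API.

```python
# Hypothetical sketch of teacher-side example selection. The scoring rule
# (squared error as a proxy for functional-gradient magnitude) is an
# assumption; GraNT derives its criterion from structure-aware teaching theory.

def select_teaching_subset(predictions, targets, k):
    """Return indices of the k graph-property pairs with the largest error."""
    errors = [(y_hat - y) ** 2 for y_hat, y in zip(predictions, targets)]
    ranked = sorted(range(len(errors)), key=lambda i: errors[i], reverse=True)
    return ranked[:k]

# Example: predictions vs. targets for 5 graphs; teach on the worst 2.
preds = [0.9, 0.2, 0.5, 0.1, 0.8]
labels = [1.0, 0.0, 1.0, 0.0, 0.0]
print(select_teaching_subset(preds, labels, k=2))  # → [4, 2]
```

Training then proceeds on the selected subset each round, which is why the subset choice, rather than the parameter update rule, is the lever for efficiency.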
Problem

Research questions and friction points this paper is trying to address.

Teaching graph property learners efficiently via example selection
Reducing training time for graph and node-level tasks
Maintaining generalization performance while enhancing learning efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nonparametric teaching for graph property learning
Graph Neural Teaching (GraNT) selects graph-property pairs
Functional gradient descent enhances GCN training efficiency
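The functional-gradient view behind the last bullet can be illustrated with a toy sketch: under squared loss, one functional gradient step moves the learner's function by the residual at each selected example, weighted by a kernel evaluation. The RBF kernel here is an assumption for illustration; in the paper the relevant kernel arises from the GCN's training dynamics, and all names below are hypothetical.

```python
# Toy functional-gradient step under squared loss:
#   f_new = f - lr * sum_i (f(x_i) - y_i) * K(x_i, .)
# The RBF kernel is a stand-in (assumption); GraNT's kernel is induced by
# the GCN's parameter-update dynamics, not chosen freely.
import math

def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * (x - z) ** 2)

def functional_gradient_step(f, selected, lr=0.5):
    """One step of functional gradient descent on the selected (x, y) pairs."""
    residuals = [(x, f(x) - y) for x, y in selected]
    return lambda z: f(z) - lr * sum(r * rbf(x, z) for x, r in residuals)

# Starting from f = 0 with one example (x=0, y=1), the step pulls f(0)
# toward the target by lr * residual = 0.5.
f0 = lambda z: 0.0
f1 = functional_gradient_step(f0, [(0.0, 1.0)], lr=0.5)
print(f1(0.0))  # → 0.5
```

The consistency result in the paper is what licenses reading ordinary GCN parameter updates as steps of this functional form, so example selection designed for the functional learner transfers to the parametric one.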
👥 Authors
Chen Zhang
Department of Electrical and Electronic Engineering, The University of Hong Kong, HKSAR, China
Weixin Bu
Reversible Inc.
Zeyi Ren
MPhil, The University of Hong Kong (Model-driven Deep Learning, Wireless Communications, Autonomous Driving)
Zhengwu Liu
The University of Hong Kong (HKU) / Tsinghua University (THU) (brain machine interfaces, computing in memory, memristor)
Yik-Chung Wu
Department of Electrical and Electronic Engineering, The University of Hong Kong, HKSAR, China
Ngai Wong
Department of Electrical and Electronic Engineering, The University of Hong Kong, HKSAR, China