Semi-Supervised Learning on Graphs using Graph Neural Networks

📅 2026-02-19
🤖 AI Summary
This work addresses the lack of rigorous theoretical justification for the empirical success of graph neural networks in semi-supervised node regression. We analyze an aggregate-and-readout model that employs a linear graph convolution for feature propagation and a deep ReLU readout function for least-squares estimation. For the first time, we establish a non-asymptotic risk bound that explicitly decouples approximation error, stochastic error, and optimization error, thereby revealing how performance depends on the labeling ratio and the graph structure. Our theory shows that the method recovers the classical nonparametric convergence rate under full supervision and retains provable guarantees even in the label-scarce regime. Numerical experiments corroborate the theoretical predictions.

📝 Abstract
Graph neural networks (GNNs) work remarkably well in semi-supervised node regression, yet a rigorous theory explaining when and why they succeed remains lacking. To address this gap, we study an aggregate-and-readout model that encompasses several common message-passing architectures: node features are first propagated over the graph and then mapped to responses via a nonlinear readout function. For least-squares estimation over GNNs with linear graph convolutions and a deep ReLU readout, we prove a sharp non-asymptotic risk bound that separates approximation, stochastic, and optimization errors. The bound makes explicit how performance scales with the fraction of labeled nodes and with graph-induced dependence. Approximation guarantees are further derived for graph smoothing followed by smooth nonlinear readouts, yielding convergence rates that recover classical nonparametric behavior under full supervision while characterizing performance when labels are scarce. Numerical experiments validate our theory, providing a systematic framework for understanding GNN performance and limitations.
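The aggregate-and-readout pipeline described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact construction: the symmetric normalization of the adjacency matrix, the propagation depth `K`, the one-hidden-layer ReLU readout, and all variable names are assumptions made here for concreteness.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}.
    An assumed (GCN-style) choice of propagation operator."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def aggregate(X, S, K=2):
    """Linear graph convolution: propagate node features K steps
    over the graph, with no nonlinearity between steps."""
    H = X
    for _ in range(K):
        H = S @ H
    return H

def readout(H, W1, b1, w2, b2):
    """ReLU readout mapping aggregated features to scalar responses
    (a single hidden layer here; the paper analyzes deep readouts)."""
    return np.maximum(H @ W1 + b1, 0.0) @ w2 + b2

def empirical_risk(y_pred, y, labeled_idx):
    """Least-squares risk computed only on the labeled subset,
    reflecting the semi-supervised setting."""
    r = y_pred[labeled_idx] - y[labeled_idx]
    return float(np.mean(r ** 2))
```

In this sketch, semi-supervision enters only through `labeled_idx`: aggregation uses the full graph and all node features, while the least-squares objective sees just the labeled fraction, which is exactly the quantity the risk bound tracks.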
Problem

Research questions and friction points this paper is trying to address.

Semi-Supervised Learning
Graph Neural Networks
Theoretical Analysis
Node Regression
Risk Bound
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Neural Networks
Semi-Supervised Learning
Non-Asymptotic Risk Bound
Aggregate-and-Readout Model
Graph Smoothing