🤖 AI Summary
This work addresses the convergence and size transferability of continuous-depth Graph Neural Differential Equations (GNDEs) as graph sizes tend to infinity. To close this gap in the existing theoretical foundations, we propose the Graphon Neural Differential Equation (Graphon-NDE) as a unified infinite-node limit model, establish its well-posedness, and develop the first trajectory convergence theory for GNDEs in the graphon limit. Leveraging tools from graph theory, graphon analysis, and dynamical systems, we derive explicit convergence rates and size-transfer error bounds under two deterministic sampling schemes. Our theory guarantees convergence of GNDE solution trajectories to their Graphon-NDE counterparts. Empirical results confirm that GNDEs trained on moderate-sized graphs maintain performance on larger graphs without retraining, improving deployment efficiency. The core contribution is the first rigorous continuous-depth graph learning theory with provable convergence guarantees and quantifiable size-transfer errors.
📝 Abstract
Continuous-depth graph neural networks, also known as Graph Neural Differential Equations (GNDEs), combine the structural inductive bias of Graph Neural Networks (GNNs) with the continuous-depth architecture of Neural ODEs, offering a scalable and principled framework for modeling dynamics on graphs. In this paper, we present a rigorous convergence analysis of GNDEs with time-varying parameters in the infinite-node limit, providing theoretical insights into their size transferability. To this end, we introduce Graphon Neural Differential Equations (Graphon-NDEs) as the infinite-node limit of GNDEs and establish their well-posedness. Leveraging tools from graphon theory and dynamical systems, we prove the trajectory-wise convergence of GNDE solutions to Graphon-NDE solutions. Moreover, we derive explicit convergence rates under two deterministic graph sampling regimes: (1) weighted graphs sampled from smooth graphons, and (2) unweighted graphs sampled from $\{0,1\}$-valued (discontinuous) graphons. We further establish size transferability bounds, providing theoretical justification for the practical strategy of transferring GNDE models trained on moderate-sized graphs to larger, structurally similar graphs without retraining. Numerical experiments using synthetic and real data support our theoretical findings.
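To make the setting concrete, the following is a minimal sketch (not the paper's actual model or experiments) of the size-transfer scenario the abstract describes: a weighted graph is sampled from a smooth graphon at deterministic latent points, and a single GNDE with time-varying weights is integrated on graphs of two different sizes. The graphon formula, the `tanh` activation, the Euler discretization, and all variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weighted_graph(graphon, n):
    """Weighted graph sampled from a graphon at n equispaced latent points
    (illustrating the deterministic weighted-graph sampling regime)."""
    u = (np.arange(n) + 0.5) / n          # deterministic latent positions in (0, 1)
    A = graphon(u[:, None], u[None, :])   # A_ij = W(u_i, u_j)
    np.fill_diagonal(A, 0.0)              # no self-loops
    return A

def gnde_euler(A, X0, weights, dt=0.1):
    """Forward-Euler integration of dX/dt = tanh((A/n) X W(t)):
    a toy GNDE whose parameters W(t_k) vary with (discretized) time.
    The 1/n scaling matches the graphon integral operator's normalization."""
    n = A.shape[0]
    X = X0.copy()
    for W in weights:                     # one Euler step per time-varying weight
        X = X + dt * np.tanh((A / n) @ X @ W)
    return X

# A smooth, symmetric graphon (assumed for illustration).
graphon = lambda u, v: 0.5 * (1.0 - np.maximum(u, v))

d = 4                                     # feature dimension (arbitrary)
weights = [0.1 * rng.standard_normal((d, d)) for _ in range(10)]

# The same shared parameters applied to graphs of two sizes sampled
# from one graphon -- the size-transfer setting, with no retraining.
for n in (50, 200):
    A = sample_weighted_graph(graphon, n)
    X = gnde_euler(A, np.ones((n, d)), weights)
    print(n, X.shape)
```

Because both graphs discretize the same graphon, the theory summarized above bounds how far the two trajectories can drift apart, which is what licenses reusing the trained parameters at the larger size.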