A Few Moments Please: Scalable Graphon Learning via Moment Matching

📅 2025-06-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing graphon estimation methods for large-scale graphs suffer from poor scalability and suboptimal resolution-agnostic approximation, primarily due to reliance on latent variable inference or computationally expensive optimal transport computations (e.g., Gromov–Wasserstein distance). Method: We propose a moment-matching-based direct graphon estimation framework that jointly models implicit neural representations (INRs) and subgraph moment statistics—bypassing latent variable learning and costly distance calculations. We theoretically establish convergence in the cut distance and introduce MomentMixup, a momentum-space data augmentation strategy to enhance generalization. Results: Our method outperforms existing scalable graphon estimators in both accuracy and efficiency on 75% of benchmarks, significantly improves downstream graph classification accuracy, achieves polynomial-time complexity, and enables efficient modeling of million-node graphs.

Technology Category

Application Category

📝 Abstract
Graphons, as limit objects of dense graph sequences, play a central role in the statistical analysis of network data. However, existing graphon estimation methods often struggle with scalability to large networks and resolution-independent approximation, due to their reliance on estimating latent variables or costly metrics such as the Gromov-Wasserstein distance. In this work, we propose a novel, scalable graphon estimator that directly recovers the graphon via moment matching, leveraging implicit neural representations (INRs). Our approach avoids latent variable modeling by training an INR--mapping coordinates to graphon values--to match empirical subgraph counts (i.e., moments) from observed graphs. This direct estimation mechanism yields a polynomial-time solution and crucially sidesteps the combinatorial complexity of Gromov-Wasserstein optimization. Building on foundational results, we establish a theoretical guarantee: when the observed subgraph motifs sufficiently represent those of the true graphon (a condition met with sufficiently large or numerous graph samples), the estimated graphon achieves a provable upper bound in cut distance from the ground truth. Additionally, we introduce MomentMixup, a data augmentation technique that performs mixup in the moment space to enhance graphon-based learning. Our graphon estimation method achieves strong empirical performance--demonstrating high accuracy on small graphs and superior computational efficiency on large graphs--outperforming state-of-the-art scalable estimators in 75% of benchmark settings and matching them in the remaining cases. Furthermore, MomentMixup demonstrated improved graph classification accuracy on the majority of our benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Scalable graphon estimation for large networks
Resolution-independent approximation without latent variables
Polynomial-time solution avoiding Gromov-Wasserstein complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses moment matching for graphon estimation
Leverages implicit neural representations (INRs)
Introduces MomentMixup for data augmentation
🔎 Similar Papers
No similar papers found.