Adaptive Graph Coarsening for Efficient GNN Training

📅 2025-09-29
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the low training efficiency and high memory overhead of Graph Neural Networks (GNNs) on large-scale graphs, this paper proposes an end-to-end learnable adaptive graph coarsening method. The approach jointly optimizes GNN parameters and node-merging policies during training, employing differentiable K-means clustering on dynamic node embeddings to achieve task-aware graph simplification. Unlike conventional coarsening methods that rely on static structural or feature-based heuristics, this is the first to enable *train-time learnable*, *heterophily-aware* dynamic coarsening. Experiments on both homophilic and heterophilic graph node classification tasks demonstrate that the method significantly reduces computational and memory costs (by up to 5.3×) while preserving or even improving classification accuracy. Visualization further confirms that the learned clustering adapts to the downstream task.
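The core ingredient the summary describes is a differentiable stand-in for K-means: instead of hard cluster labels, each node receives a softmax distribution over cluster centroids, so gradients can flow back into the embeddings during training. A minimal numpy sketch of that idea follows; the function name and the `temperature` parameter are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_kmeans_assign(embeddings, centroids, temperature=1.0):
    """Soft cluster assignments: a softmax over negative squared
    distances to each centroid. A differentiable relaxation of hard
    K-means, in the spirit of train-time learnable clustering.
    (Illustrative sketch; the paper's exact formulation may differ.)"""
    # (N, K) squared Euclidean distances from nodes to centroids
    d2 = ((embeddings[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    # Each row is a probability distribution over the K clusters
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 4))   # toy node embeddings, N=6, dim=4
C = rng.normal(size=(2, 4))   # K=2 cluster centroids
S = soft_kmeans_assign(Z, C)  # (6, 2) soft assignment matrix
```

Lowering `temperature` sharpens the assignments toward hard K-means labels, which is the usual knob for annealing a soft clustering during training.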


πŸ“ Abstract
We propose an adaptive graph coarsening method to jointly learn graph neural network (GNN) parameters and merge nodes via K-means clustering during training. As real-world graphs grow larger, processing them directly becomes increasingly challenging and sometimes infeasible. Tailoring algorithms to large-scale data may sacrifice performance, so we instead consider graph reduction to decrease the amount of data used during training. In particular, we propose a method to simultaneously train a GNN and coarsen its graph by partitioning nodes via K-means clustering based on their embeddings. Unlike past graph coarsening works, our approach allows us to merge nodes during training. Not only does this preclude coarsening as a preprocessing step, but our node clusters can adapt to the learning task instead of relying solely on graph connectivity and features. Thus, our method is amenable to scenarios that are challenging for other methods, such as heterophilic data. We validate our approach on both homophilic and heterophilic node classification datasets. We further visualize relationships between node embeddings and their corresponding clusters to illustrate that our coarsened graph adapts to the learning task during training.
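Given a (soft or hard) node-to-cluster assignment matrix S of shape (N, K), merging nodes reduces to a standard coarsening operator: the coarsened adjacency aggregates edge mass between clusters, and super-node features average the members' features. The sketch below shows that generic operator in numpy; the normalization choices are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def coarsen(A, X, S):
    """Collapse a graph to K cluster 'super-nodes'.
    A: (N, N) adjacency, X: (N, F) features, S: (N, K) assignments.
    Generic coarsening sketch; the paper's normalization may differ."""
    A_c = S.T @ A @ S                # (K, K) inter-cluster edge mass
    mass = S.sum(axis=0)             # (soft) cluster sizes
    X_c = (S.T @ X) / mass[:, None]  # (K, F) mass-averaged features
    return A_c, X_c

# Toy example: a 4-node path graph 0-1-2-3, with nodes {0, 1} hard-
# assigned to cluster 0 and nodes {2, 3} to cluster 1.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1., 0.], [3., 0.], [0., 2.], [0., 4.]])
S = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
A_c, X_c = coarsen(A, X, S)
# A_c diagonal carries intra-cluster edge mass (the merged 0-1 and
# 2-3 edges); the off-diagonal carries the single 1-2 cross edge.
```

Because training merges nodes via the learned assignments, the GNN's message passing can then run on the much smaller (A_c, X_c), which is where the reported memory and compute savings come from.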
Problem

Research questions and friction points this paper is trying to address.

- Adaptive graph coarsening for efficient GNN training
- Jointly learns GNN parameters and merges nodes during training
- Enables graph reduction adaptable to heterophilic and homophilic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Adaptive graph coarsening via K-means clustering
- Joint learning of GNN parameters and node merging
- Dynamic node clustering adapting to the learning task