🤖 AI Summary
To address the excessive memory and computational overhead in large-scale graph neural network (GNN) training, this paper proposes the first systematic framework integrating multi-scale graph representations. The method constructs hierarchical, multi-granularity graph structures via graph coarsening and introduces a coarse-to-fine training paradigm, subgraph-to-full-graph transfer strategy, and cross-scale gradient approximation mechanism—reducing computational complexity while preserving model accuracy. Experiments on multiple benchmark datasets demonstrate up to 40–65% memory reduction, 2.1–3.8× speedup in training time, and classification accuracy maintained or slightly improved. Crucially, this work is the first to deeply embed multi-scale graph representations across the entire GNN training pipeline, establishing a scalable and efficient paradigm for large-scale graph learning.
📝 Abstract
Graph Neural Networks (GNNs) have emerged as a powerful tool for learning and inferring from graph-structured data, and are widely used in a variety of applications, often considering large amounts of data and large graphs. However, training on such data requires large memory and extensive computations. In this paper, we introduce a novel framework for efficient multiscale training of GNNs, designed to integrate information across multiscale representations of a graph. Our approach leverages a hierarchical graph representation, taking advantage of coarse graph scales in the training process, where each coarse scale graph has fewer nodes and edges. Based on this approach, we propose a suite of GNN training methods: such as coarse-to-fine, sub-to-full, and multiscale gradient computation. We demonstrate the effectiveness of our methods on various datasets and learning tasks.