Towards Efficient Training of Graph Neural Networks: A Multiscale Approach

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive memory and computational overhead of large-scale graph neural network (GNN) training, this paper proposes a systematic framework that integrates multiscale graph representations. The method constructs hierarchical, multi-granularity graph structures via graph coarsening and introduces a coarse-to-fine training paradigm, a subgraph-to-full-graph transfer strategy, and a cross-scale gradient approximation mechanism, reducing computational cost while preserving model accuracy. Experiments on multiple benchmark datasets demonstrate 40–65% memory reduction and 2.1–3.8× training speedups, with classification accuracy maintained or slightly improved. The authors position this as the first work to embed multiscale graph representations across the entire GNN training pipeline, establishing a scalable and efficient paradigm for large-scale graph learning.


📝 Abstract
Graph Neural Networks (GNNs) have emerged as a powerful tool for learning and inferring from graph-structured data, and are widely used in a variety of applications, often involving large amounts of data and large graphs. However, training on such data requires substantial memory and extensive computation. In this paper, we introduce a novel framework for efficient multiscale training of GNNs, designed to integrate information across multiscale representations of a graph. Our approach leverages a hierarchical graph representation, taking advantage of coarse graph scales in the training process, where each coarse-scale graph has fewer nodes and edges. Based on this approach, we propose a suite of GNN training methods, such as coarse-to-fine, sub-to-full, and multiscale gradient computation. We demonstrate the effectiveness of our methods on various datasets and learning tasks.
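The coarse-to-fine idea in the abstract relies on transferring what was learned on a coarse graph back to the finer graph (a prolongation step). The sketch below is purely illustrative of that mechanism, not the paper's implementation; the function name `prolong` and the `fine_to_coarse` mapping are assumptions, and a real GNN would transfer layer weights and node states rather than plain floats.

```python
# Hedged sketch of coarse-to-fine transfer: each fine node is initialized
# from the super-node it was merged into during coarsening.
# Illustrative only; the paper's transfer operator may differ.

def prolong(coarse_values, fine_to_coarse):
    """Map trained coarse-level node values back onto the fine level."""
    return [coarse_values[c] for c in fine_to_coarse]

# 3 super-nodes trained at the coarse scale ...
coarse_embed = [0.1, 0.5, 0.9]
# ... mapped back to 6 fine nodes (pairs were merged during coarsening)
fine_to_coarse = [0, 0, 1, 1, 2, 2]
print(prolong(coarse_embed, fine_to_coarse))  # [0.1, 0.1, 0.5, 0.5, 0.9, 0.9]
```

Training would then continue at the fine scale from this warm start instead of from scratch, which is where the reported savings come from.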
Problem

Research questions and friction points this paper is trying to address.

Efficient training of Graph Neural Networks
Reducing memory and computation requirements
Multiscale graph representation integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multiscale graph representation for efficient training
Hierarchical coarse-to-fine GNN training methods
Multiscale gradient computation reduces resource usage
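The hierarchical representation behind these contributions can be illustrated with a minimal coarsening sketch. This is an assumption-laden toy (greedy edge matching, pure Python), not the paper's coarsening operator; the names `coarsen` and `hierarchy` are invented for illustration.

```python
# Hedged sketch of one graph-coarsening level, as used in multiscale GNN
# training. Illustrative only -- the paper's matching rule may differ.

def coarsen(edges, num_nodes):
    """Greedy edge matching: merge each matched pair into one super-node.

    Returns (coarse_edges, coarse_num_nodes). Each level roughly halves
    the node count, so coarse-scale training touches far less data.
    """
    parent = list(range(num_nodes))   # parent[v] = representative of v's super-node
    matched = set()
    for u, v in edges:                # match endpoints of yet-unmatched edges
        if u not in matched and v not in matched and u != v:
            matched.update((u, v))
            parent[v] = u
    coarse_id = {}                    # relabel representatives as 0..k-1
    for n in range(num_nodes):
        coarse_id.setdefault(parent[n], len(coarse_id))
    coarse_edges = sorted({
        tuple(sorted((coarse_id[parent[u]], coarse_id[parent[v]])))
        for u, v in edges
        if coarse_id[parent[u]] != coarse_id[parent[v]]
    })
    return coarse_edges, len(coarse_id)

def hierarchy(edges, num_nodes, min_nodes=2):
    """Build the multiscale hierarchy: finest graph first, coarsest last."""
    levels = [(edges, num_nodes)]
    while levels[-1][1] > min_nodes:
        e, n = coarsen(*levels[-1])
        if n == levels[-1][1]:        # no further reduction; stop
            break
        levels.append((e, n))
    return levels

# A coarse-to-fine schedule would train on levels[-1] first, then move
# toward levels[0] (the original graph).
path = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print([n for _, n in hierarchy(path, 6)])  # node counts per level: [6, 3, 2]
```

The multiscale gradient computation mentioned above would, by the same logic, evaluate (approximate) gradients on these smaller coarse graphs for part of the training iterations.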
Eshed Gal
Department of Computer Science, Ben-Gurion University of the Negev
Moshe Eliasof
University of Cambridge
Carola-Bibiane Schönlieb
Department of Applied Mathematics and Theoretical Physics, University of Cambridge
Eldad Haber
Professor of Mathematics and Geophysics, University of British Columbia
Computational Science
Eran Treister
Computer Science Dept at Ben-Gurion University
Scientific computing · Multigrid methods · Optimization methods · Inverse problems · Machine Learning