IGNN-Solver: A Graph Neural Solver for Implicit Graph Neural Networks

📅 2024-10-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Implicit Graph Neural Networks (IGNNs) effectively model long-range dependencies and mitigate over-smoothing, but their fixed-point iterative solvers suffer from high computational cost and poor scalability, hindering deployment on large-scale graphs. Method: We propose IGNN-Solver, the first dedicated solver for IGNNs, which (i) integrates generalized Anderson acceleration with a lightweight parameterized GNN for efficient iterative updates, and (ii) introduces a graph-structure-aware co-optimization framework combining sparsification and low-rank storage compression. Contribution/Results: Evaluated across multi-scale graph datasets, IGNN-Solver achieves 1.5×–8× inference speedup with zero accuracy loss. Crucially, acceleration scales positively with graph size—larger graphs yield higher speedups—thereby significantly enhancing the practical deployability of IGNNs in real-world large-scale scenarios.

📝 Abstract
Implicit graph neural networks (IGNNs), which exhibit strong expressive power with a single layer, have recently demonstrated remarkable performance in capturing long-range dependencies (LRD) in underlying graphs while effectively mitigating the over-smoothing problem. However, IGNNs rely on computationally expensive fixed-point iterations, which lead to significant speed and scalability limitations, hindering their application to large-scale graphs. To achieve fast fixed-point solving for IGNNs, we propose a novel graph neural solver, IGNN-Solver, which leverages the generalized Anderson Acceleration method, parameterized by a tiny GNN, and learns iterative updates as a graph-dependent temporal process. To improve effectiveness on large-scale graph tasks, we further integrate sparsification and storage compression methods, specifically tailored for the IGNN-Solver, into its design. Extensive experiments demonstrate that the IGNN-Solver significantly accelerates inference on both small- and large-scale tasks, achieving a $1.5\times$ to $8\times$ speedup without sacrificing accuracy. This advantage becomes more pronounced as the graph scale grows, facilitating its large-scale deployment in real-world applications. The code to reproduce our results is available at https://github.com/landrarwolf/IGNN-Solver.
Problem

Research questions and friction points this paper is trying to address.

Accelerates IGNN fixed-point solving
Reduces computational cost for large graphs
Enhances scalability without accuracy loss
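The cost the paper targets comes from solving the implicit layer's fixed point by plain iteration. A minimal sketch of that baseline, assuming a tanh update with illustrative names (not the paper's exact parameterization):

```python
import numpy as np

def ignn_fixed_point(A, X, W, B, tol=1e-6, max_iter=300):
    """Naive Picard iteration for Z* = tanh(W Z* A + B X).

    Every step touches the whole graph (the Z @ A product), so cost grows
    with both graph size and iteration count -- the bottleneck IGNN-Solver
    is designed to shrink. All names here are illustrative.
    """
    Z = np.zeros((W.shape[0], A.shape[0]))  # hidden states, one column per node
    for k in range(1, max_iter + 1):
        Z_next = np.tanh(W @ Z @ A + B @ X)
        if np.linalg.norm(Z_next - Z) < tol:
            return Z_next, k
        Z = Z_next
    return Z, max_iter
```

Convergence of this baseline requires the update to be a contraction, which IGNN-style models typically enforce by constraining the norm of the weight matrix.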
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized Anderson Acceleration method
Tiny GNN parameterization
Sparsification and storage compression
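Classic Anderson acceleration, which the solver generalizes, reuses a short history of iterates and picks mixing weights via a small least-squares problem; the paper's twist is to have a tiny GNN predict those weights instead. A sketch of the classical scheme on a generic fixed-point map `f` (a hypothetical helper, not the paper's implementation):

```python
import numpy as np

def anderson(f, z0, m=5, tol=1e-6, max_iter=100, beta=1.0):
    """Anderson(m) mixing for the fixed point z* = f(z*).

    Classical version: mixing weights come from a regularized least-squares
    fit over the last m residuals. IGNN-Solver instead *learns* these
    weights with a tiny GNN; that learned component is not reproduced here.
    """
    z_hist = [z0.ravel()]
    g_hist = [f(z0).ravel()]
    for k in range(max_iter):
        if np.linalg.norm(g_hist[-1] - z_hist[-1]) < tol:
            return g_hist[-1].reshape(z0.shape), k
        mk = min(m, len(z_hist))
        # Columns: residuals g_i - z_i of the last mk iterates
        R = np.stack(g_hist[-mk:], axis=1) - np.stack(z_hist[-mk:], axis=1)
        # Minimize ||R @ alpha|| subject to sum(alpha) = 1 (regularized)
        gamma = np.linalg.solve(R.T @ R + 1e-8 * np.eye(mk), np.ones(mk))
        alpha = gamma / gamma.sum()
        z_new = (beta * np.stack(g_hist[-mk:], axis=1)
                 + (1 - beta) * np.stack(z_hist[-mk:], axis=1)) @ alpha
        z_hist.append(z_new)
        g_hist.append(f(z_new.reshape(z0.shape)).ravel())
    return g_hist[-1].reshape(z0.shape), max_iter
```

On a contractive map such as `f(z) = 0.5 * z + 1.0`, this reaches the fixed point `z* = 2` in a handful of iterations, far fewer than plain iteration needs at tight tolerances.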
👥 Authors
Junchao Lin (School of EIC, Huazhong University of Science and Technology)
Zenan Ling (Huazhong University of Science and Technology)
Zhanbo Feng (Department of Computer Science and Engineering, Shanghai Jiao Tong University)
Feng Zhou (Center for Applied Statistics and School of Statistics, Renmin University of China)
Jingwen Xu (School of EIC, Huazhong University of Science and Technology)
Robert C. Qiu (Professor of Electrical Engineering, Tennessee Technological University)