🤖 AI Summary
To address the high redundancy and computational overhead of Asynchronous Recurrent Graph Neural Networks (ARGNNs), this paper proposes a dynamic edge pruning method grounded in graph spectral theory. It introduces, for the first time, the imaginary part of the Laplacian matrix's complex eigenvalues into ARGNN structural optimization, establishing a theoretical link between this spectral quantity and the stability of asynchronous message passing, which enables spectral-aware, adaptive edge sparsification. By combining data-driven learning with spectral analysis, the method achieves a 37–52% parameter reduction and a 41% inference latency reduction with less than 0.8% accuracy degradation across multiple dynamic graph benchmarks, substantially outperforming conventional structural pruning and gradient-based approaches. The core contribution lies in uncovering how complex spectral properties shape asynchronous recurrent dynamics, and in leveraging this insight to design an interpretable, efficient dynamic pruning paradigm.
📝 Abstract
Graph Neural Networks (GNNs) have emerged as a powerful tool for learning on graph-structured data, with applications in domains ranging from social network analysis to molecular biology. Within this broad family, Asynchronous Recurrent Graph Neural Networks (ARGNNs) stand out for their ability to capture complex dependencies in dynamic graphs, mirroring the intricate, adaptive behavior of living systems. However, this expressiveness often comes at the cost of large, computationally expensive models, so pruning unnecessary edges becomes crucial for improving efficiency without significantly compromising performance. This paper presents a dynamic pruning method based on graph spectral theory that leverages the imaginary component of the eigenvalues of the network graph's Laplacian.
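To make the spectral quantity concrete: the Laplacian of a *directed* graph is non-symmetric, so its eigenvalues can be complex, and the imaginary parts reflect cyclic (rotational) structure in the graph. The sketch below is a hypothetical, brute-force illustration of spectral-aware edge pruning, not the paper's learned method: it greedily removes the edges whose deletion most reduces the total imaginary spectral mass of the Laplacian.

```python
import numpy as np

def directed_laplacian(A):
    # Out-degree Laplacian L = D - A. For a directed graph L is
    # non-symmetric, so its eigenvalues may be complex.
    D = np.diag(A.sum(axis=1))
    return D - A

def imag_spectral_mass(A):
    # Sum of |Im(lambda)| over the Laplacian spectrum -- a rough proxy
    # for how much cyclic structure drives asynchronous dynamics.
    return np.abs(np.linalg.eigvals(directed_laplacian(A)).imag).sum()

def prune_edges(A, k):
    """Greedily remove the k edges whose removal most reduces the
    imaginary spectral mass (illustrative O(k * |E| * n^3) scoring)."""
    A = A.astype(float).copy()
    for _ in range(k):
        best_edge, best_score = None, np.inf
        for i, j in zip(*np.nonzero(A)):
            w = A[i, j]
            A[i, j] = 0.0                    # tentatively drop the edge
            score = imag_spectral_mass(A)
            A[i, j] = w                      # restore it
            if score < best_score:
                best_edge, best_score = (i, j), score
        if best_edge is None:
            break                            # no edges left to prune
        A[best_edge] = 0.0                   # commit the best removal
    return A

# A directed 4-cycle: its Laplacian has eigenvalues 0, 2, 1 +/- i.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
print(imag_spectral_mass(A))      # 2.0 (two eigenvalues with Im = +/-1)
A_pruned = prune_edges(A, 1)      # breaking the cycle makes the spectrum real
print(imag_spectral_mass(A_pruned))
```

Removing any single edge of the cycle turns the graph into a path, whose Laplacian is triangular under a suitable node ordering, so its spectrum becomes purely real; the greedy scan finds such an edge immediately. The actual method summarized above replaces this exhaustive scoring with a learned, data-driven criterion.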