🤖 AI Summary
This work addresses the high computational redundancy and complexity of graph learning by proposing a graph sparsification framework grounded in Zero Forcing (ZF) dynamics, yielding a “learning backbone graph” that preserves essential learnability properties. It establishes, for the first time, a theoretical link between ZF processes and graph learnability, using dynamic controllability as a principled criterion for structural simplification. The method introduces a tree-structured backbone construction paradigm and incorporates node-distance-weighted pruning to improve robustness. Experiments on eight benchmark graph datasets and six representative baseline models show that the approach substantially improves inference efficiency while achieving, on average, higher classification accuracy than state-of-the-art sparsification methods, validating its effectiveness and generalizability.
📝 Abstract
This paper introduces a novel framework for graph sparsification that preserves the essential learning attributes of original graphs, improving computational efficiency and reducing complexity in learning algorithms. We refer to these sparse graphs as "learning backbones". Our approach leverages the zero-forcing (ZF) phenomenon, a dynamic process on graphs with applications in network control. The key idea is to generate a tree from the original graph that retains critical dynamical properties. By correlating these properties with learning attributes, we construct effective learning backbones. We evaluate the performance of our ZF-based backbones in graph classification tasks across eight datasets and six baseline models. The results demonstrate that our method outperforms existing techniques. Additionally, we explore extensions using node distance metrics to further enhance the framework's utility.
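For readers unfamiliar with zero forcing, the dynamic process the abstract refers to can be sketched with the standard color-change rule: starting from an initial set of "black" vertices, a black vertex with exactly one white neighbor forces that neighbor to become black, repeating until no more forces are possible. The sketch below is a minimal illustration of this standard rule only, not the paper's backbone-construction algorithm; the function name and example graphs are ours.

```python
def zero_forcing_closure(adj, initial_black):
    """Iterate the standard ZF color-change rule to a fixed point.

    adj: dict mapping each vertex to a list of its neighbors.
    initial_black: iterable of initially black (colored) vertices.
    Returns the set of vertices that end up black.
    """
    black = set(initial_black)
    changed = True
    while changed:
        changed = False
        for v in list(black):
            # A black vertex forces only when it has exactly one white neighbor.
            white_nbrs = [u for u in adj[v] if u not in black]
            if len(white_nbrs) == 1:
                black.add(white_nbrs[0])
                changed = True
    return black

# On a path 0-1-2-3, the single endpoint {0} forces the entire graph,
# so {0} is a zero forcing set of the path.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

# On a star with center 0 and leaves 1..3, the center alone forces nothing:
# it has three white neighbors, so the rule never applies.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

A set whose closure is the whole vertex set is a zero forcing set; such sets are the standard bridge between ZF and network controllability that the abstract alludes to.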