🤖 AI Summary
To address the challenges of large-scale data, high spatiotemporal redundancy, and exorbitant training costs in dynamic graph learning, this paper introduces dynamic graph condensation (DGC), a novel task that aims to synthesize ultra-compact dynamic graphs which faithfully preserve the original graph's spatiotemporal evolution. The proposed framework, DyGC, uses a spiking structure generation mechanism to model temporally-aware connectivity, constructs a state evolving field for fine-grained spatiotemporal distribution alignment, and employs a differentiable graph synthesis pipeline that enables end-to-end optimization. Experiments demonstrate that the condensed graphs, containing only 0.5% of the original nodes and edges, retain up to 96.2% of downstream DGNN performance while accelerating training by up to 1846×. This approach significantly improves data efficiency and scalability in dynamic graph learning.
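The summary says edges are generated via the dynamics of spiking neurons, but gives no formula. One plausible reading, sketched below, is a leaky integrate-and-fire (LIF) gate: learned pairwise edge scores act as input current, and an edge "fires" at a timestep when the pair's membrane potential crosses a threshold. All names, parameters, and dynamics here (`tau`, `v_th`, the reset rule) are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def spiking_adjacency(scores, T, tau=0.9, v_th=1.0):
    """Gate an (n, n) matrix of non-negative edge scores through LIF
    dynamics to produce a binary adjacency snapshot per timestep.
    The potential leaks by factor `tau` each step, integrates the
    score as input current, fires an edge when it reaches `v_th`,
    and resets after firing."""
    n = scores.shape[0]
    v = np.zeros((n, n))        # membrane potential per node pair
    adj = np.zeros((T, n, n))   # temporal adjacency snapshots
    for t in range(T):
        v = tau * v + scores             # leaky integration
        spike = v >= v_th                # fire where threshold crossed
        adj[t] = spike.astype(float)
        v = np.where(spike, 0.0, v)      # reset fired potentials
    return adj
```

Under this reading, strong pairwise affinities fire frequently (dense recurring edges) while weak ones fire rarely or never, which is one way a scalar score could be unrolled into an evolving structure.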
📝 Abstract
Recent research on deep graph learning has shifted from static to dynamic graphs, motivated by the evolving behaviors observed in complex real-world systems. However, the temporal extension in dynamic graphs poses significant data efficiency challenges, including increased data volume, high spatiotemporal redundancy, and reliance on costly dynamic graph neural networks (DGNNs). To alleviate these concerns, we pioneer the study of dynamic graph condensation (DGC), which aims to substantially reduce the scale of dynamic graphs for data-efficient DGNN training. Accordingly, we propose DyGC, a novel framework that condenses the real dynamic graph into a compact version while faithfully preserving its inherent spatiotemporal characteristics. Specifically, to endow synthetic graphs with realistic evolving structures, a novel spiking structure generation mechanism is introduced. It draws on the dynamic behavior of spiking neurons to model temporally-aware connectivity in dynamic graphs. Given the tightly coupled spatiotemporal dependencies, DyGC adopts a tailored distribution matching approach that first constructs a semantically rich state evolving field for dynamic graphs, and then performs fine-grained spatiotemporal state alignment to guide the optimization of the condensed graph. Experiments across multiple dynamic graph datasets and representative DGNN architectures demonstrate the effectiveness of DyGC. Notably, our method retains up to 96.2% of DGNN performance with only 0.5% of the original graph size, and achieves up to 1846× training speedup.
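The abstract's "fine-grained spatiotemporal state alignment" is a distribution matching objective, though its exact form is not given here. A common way to realize distribution matching is a kernel maximum mean discrepancy (MMD) between real and synthetic node states, matched per timestep and averaged; the sketch below uses an RBF-kernel MMD purely as an illustrative stand-in, and `rbf_mmd`, `gamma`, and the per-timestep averaging are assumptions, not the paper's loss.

```python
import numpy as np

def rbf_mmd(x, y, gamma=1.0):
    """Squared maximum mean discrepancy between samples x: (n, d)
    and y: (m, d), using an RBF kernel exp(-gamma * ||a - b||^2)."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def spatiotemporal_matching_loss(real_states, syn_states):
    """Illustrative alignment loss: match the distribution of node
    states at every timestep, then average over time.  Each argument
    is a list of (n_t, d) arrays, one per timestep; node counts may
    differ between real and synthetic graphs, feature dim must match."""
    return float(np.mean([rbf_mmd(r, s)
                          for r, s in zip(real_states, syn_states)]))
```

In an actual condensation loop, `syn_states` would be differentiable functions of the synthetic graph's parameters, so minimizing this loss with a framework such as PyTorch would drive the condensed graph's state distribution toward the real one at each timestep.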