Dynamic Graph Condensation

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of large-scale data, high spatiotemporal redundancy, and exorbitant training costs in dynamic graph learning, this paper introduces dynamic graph condensation (DGC), a novel task that synthesizes ultra-compact dynamic graphs faithfully preserving the original graph's spatiotemporal evolution. The proposed framework, DyGC, uses a spiking structure generation mechanism to model temporally-aware connectivity, constructs a dynamic graph state evolving field for fine-grained spatiotemporal distribution alignment, and adopts a differentiable graph synthesis design enabling end-to-end optimization. Experiments demonstrate that the condensed graphs, at only 0.5% of the original graph size, retain up to 96.2% of downstream DGNN performance while accelerating training by up to 1846×. This approach significantly enhances data efficiency and scalability in dynamic graph learning.

📝 Abstract
Recent research on deep graph learning has shifted from static to dynamic graphs, motivated by the evolving behaviors observed in complex real-world systems. However, the temporal extension in dynamic graphs poses significant data efficiency challenges, including increased data volume, high spatiotemporal redundancy, and reliance on costly dynamic graph neural networks (DGNNs). To alleviate these concerns, we pioneer the study of dynamic graph condensation (DGC), which aims to substantially reduce the scale of dynamic graphs for data-efficient DGNN training. Accordingly, we propose DyGC, a novel framework that condenses the real dynamic graph into a compact version while faithfully preserving the inherent spatiotemporal characteristics. Specifically, to endow synthetic graphs with realistic evolving structures, a novel spiking structure generation mechanism is introduced. It draws on the dynamic behavior of spiking neurons to model temporally-aware connectivity in dynamic graphs. Given the tightly coupled spatiotemporal dependencies, DyGC proposes a tailored distribution matching approach that first constructs a semantically rich state evolving field for dynamic graphs, and then performs fine-grained spatiotemporal state alignment to guide the optimization of the condensed graph. Experiments across multiple dynamic graph datasets and representative DGNN architectures demonstrate the effectiveness of DyGC. Notably, our method retains up to 96.2% DGNN performance with only 0.5% of the original graph size, and achieves up to 1846 times training speedup.
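The spiking structure generation idea can be illustrated with a minimal sketch. The snippet below is not the paper's exact formulation; it only shows the general principle of leaky integrate-and-fire (LIF) dynamics applied to edge generation: a membrane potential per node pair integrates feature similarity over time, and an edge "fires" at a timestep once the potential crosses a threshold. The function name, the similarity input, and the `threshold`/`decay` hyperparameters are all illustrative assumptions.

```python
import numpy as np

def spiking_adjacency(node_feats, threshold=1.0, decay=0.8):
    """Illustrative LIF-style edge generation (not the paper's method).

    node_feats: array of shape (T, N, d) with per-timestep node features.
    Returns a binary adjacency sequence of shape (T, N, N).
    """
    T, N, _ = node_feats.shape
    membrane = np.zeros((N, N))                 # membrane potential per node pair
    adj_seq = np.zeros((T, N, N), dtype=int)
    for t in range(T):
        sim = node_feats[t] @ node_feats[t].T   # pairwise similarity as input current
        membrane = decay * membrane + sim       # leaky integration over time
        spikes = membrane >= threshold          # a spike means an edge exists at step t
        adj_seq[t] = spikes.astype(int)
        membrane[spikes] = 0.0                  # reset fired potentials
    return adj_seq

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 5, 8))   # T=4 timesteps, N=5 nodes, d=8 features
adj = spiking_adjacency(feats)
print(adj.shape)                     # (4, 5, 5)
```

The leak and reset make connectivity temporally aware: an edge's presence at step t depends on the history of pairwise similarity, not just the current snapshot, which is the intuition behind using spiking dynamics for evolving structures.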
Problem

Research questions and friction points this paper is trying to address.

Reducing dynamic graph scale for efficient DGNN training
Preserving spatiotemporal characteristics in condensed dynamic graphs
Enhancing training speed and performance with minimal graph size
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic graph condensation for efficient DGNN training
Spiking structure generation for evolving graph modeling
Fine-grained spatiotemporal state alignment optimization
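As a rough intuition for the alignment objective, the sketch below matches the first moment of node-state distributions between the real and condensed graphs at every timestep. This is a deliberately simplified stand-in, assuming states are plain feature arrays; the paper's actual state evolving field and fine-grained alignment are more elaborate.

```python
import numpy as np

def spatiotemporal_alignment_loss(real_states, synth_states):
    """Simplified per-timestep distribution matching (illustrative only).

    real_states, synth_states: lists of (num_nodes, dim) arrays, one per
    timestep. The condensed graph may have far fewer nodes than the real one.
    """
    loss = 0.0
    for h_real, h_synth in zip(real_states, synth_states):
        # align the mean node-state embedding at this timestep
        loss += np.sum((h_real.mean(axis=0) - h_synth.mean(axis=0)) ** 2)
    return loss / len(real_states)

rng = np.random.default_rng(1)
real = [rng.normal(size=(100, 16)) for _ in range(3)]   # 3 timesteps, 100 real nodes
synth = [rng.normal(size=(5, 16)) for _ in range(3)]    # condensed graph: 5 nodes
print(spatiotemporal_alignment_loss(real, synth))
```

In a condensation pipeline, a loss of this kind would be differentiated with respect to the synthetic node features so the condensed graph can be optimized end-to-end.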
Dong Chen
Institute of Information Science, Beijing Jiaotong University; Visual Intelligence + X International Joint Laboratory of the Ministry of Education
Shuai Zheng
Institute of Information Science, Beijing Jiaotong University; Visual Intelligence + X International Joint Laboratory of the Ministry of Education
Yeyu Yan
Beijing Jiaotong University (graph neural networks)
Muhao Xu
PhD, Shandong University
Zhenfeng Zhu
Institute of Information Science, Beijing Jiaotong University; Visual Intelligence + X International Joint Laboratory of the Ministry of Education
Yao Zhao
Institute of Information Science, Beijing Jiaotong University; Visual Intelligence + X International Joint Laboratory of the Ministry of Education
Kunlun He
Professor of Medicine, Chinese PLA General Hospital (medical big data; cardiology)