Effective Dataset Distillation for Spatio-Temporal Forecasting with Bi-dimensional Compression

📅 2026-03-11

🤖 AI Summary
This work proposes STemDist, the first bi-dimensional dataset distillation framework tailored to spatio-temporal forecasting. Existing methods typically compress only a single dimension, either temporal or spatial, which limits their efficiency on large-scale spatio-temporal sequence prediction. In contrast, STemDist jointly compresses both the time and space dimensions by integrating cluster-level coarse-grained distillation with subset-level fine-grained optimization. This approach significantly reduces training overhead while simultaneously improving prediction accuracy. Extensive experiments on five real-world datasets demonstrate that STemDist achieves up to 6× faster training and 8× lower memory consumption, with up to 12% lower prediction error than state-of-the-art baselines.

📝 Abstract
Spatio-temporal time series are widely used in real-world applications, including traffic prediction and weather forecasting. They are sequences of observations over extensive periods and multiple locations, naturally represented as multidimensional data. Forecasting is a central task in spatio-temporal analysis, and numerous deep learning methods have been developed to address it. However, as dataset sizes and model complexities continue to grow in practice, training deep learning models has become increasingly time- and resource-intensive. A promising solution to this challenge is dataset distillation, which synthesizes compact datasets that can effectively replace the original data for model training. Although successful in various domains, including time series analysis, existing dataset distillation methods compress only one dimension, making them less suitable for spatio-temporal datasets, where both spatial and temporal dimensions jointly contribute to the large data volume. To address this limitation, we propose STemDist, the first dataset distillation method specialized for spatio-temporal time series forecasting. A key idea of our solution is to compress both the temporal and spatial dimensions in a balanced manner, reducing training time and memory. We further reduce the distillation cost by performing distillation at the cluster level rather than the individual location level, and we complement this coarse-grained approach with a subset-based granular distillation technique that enhances forecasting performance. On five real-world datasets, we show empirically that, compared to both general and time-series dataset distillation methods, datasets distilled by our STemDist method enable model training that is (1) faster (up to 6×), (2) more memory-efficient (up to 8×), and (3) more effective (up to 12% lower prediction error).
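The abstract does not specify STemDist's algorithm, but the core idea of bi-dimensional compression can be illustrated with a minimal sketch: group locations by similarity of their series and keep one representative series per cluster (spatial compression), then subsample time steps (temporal compression). The function name, the tiny k-means loop, and the stride-based temporal subsampling below are all assumptions for illustration, not the paper's method, which distills synthetic data rather than simply averaging and subsampling.

```python
import numpy as np

def compress_spatiotemporal(data, n_clusters=4, t_stride=2, seed=0):
    """Conceptual bi-dimensional compression (NOT the STemDist algorithm).

    data: array of shape (T, N) -- T time steps at N locations.
    Spatial: cluster locations by their series, keep cluster-mean series.
    Temporal: keep every t_stride-th time step.
    Returns (compressed data of shape (T // t_stride, n_clusters), labels).
    """
    rng = np.random.default_rng(seed)
    T, N = data.shape
    series = data.T  # (N, T): one row per location

    # Tiny k-means over location series (fixed iteration count for brevity).
    centers = series[rng.choice(N, n_clusters, replace=False)]
    for _ in range(20):
        dists = np.linalg.norm(series[:, None, :] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = series[labels == k].mean(axis=0)

    # Spatial compression: cluster-mean series; temporal: stride subsample.
    compressed = centers[:, ::t_stride].T  # (T // t_stride, n_clusters)
    return compressed, labels

# Usage: 48 time steps at 12 locations -> 24 steps at 4 cluster series,
# an 83% reduction in data volume before any model training.
rng = np.random.default_rng(1)
data = np.sin(np.linspace(0, 6, 48))[:, None] + 0.1 * rng.standard_normal((48, 12))
small, labels = compress_spatiotemporal(data, n_clusters=4, t_stride=2)
print(small.shape)  # (24, 4)
```

A distillation method such as STemDist would go further, optimizing the small dataset so that models trained on it match models trained on the full data, but the shape reduction above is the kind of bi-dimensional saving the abstract describes.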
Problem

Research questions and friction points this paper is trying to address.

spatio-temporal forecasting
dataset distillation
bi-dimensional compression
time series
deep learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

spatio-temporal forecasting
dataset distillation
bi-dimensional compression
cluster-level distillation
time series
Taehyung Kwon
Ph.D. Student at KAIST AI
Graph Mining · Tensor Mining

Yeonje Choi
Kim Jaechul Graduate School of AI, KAIST, Seoul, Republic of Korea

Yeongho Kim
Kim Jaechul Graduate School of AI, KAIST, Seoul, Republic of Korea

Kijung Shin
Associate Professor, KAIST
Data Mining · Graph Mining · Network Science