Learning from Complexity: Exploring Dynamic Sample Pruning of Spatio-Temporal Training

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes ST-Prune, a novel approach that introduces a learning-complexity-based dynamic sample pruning mechanism into spatio-temporal forecasting. Addressing the inefficiency of conventional training paradigms that iterate over redundant static datasets, ST-Prune adaptively selects highly informative samples by continuously evaluating the model's learning state during training. This breaks away from the traditional static data-iteration framework, significantly accelerating training across multiple real-world spatio-temporal datasets while maintaining or even improving predictive performance. The approach demonstrates strong generalizability and scalability, offering a more efficient alternative for training spatio-temporal prediction models.

📝 Abstract
Spatio-temporal forecasting is fundamental to intelligent systems in transportation, climate science, and urban planning. However, training deep learning models on the massive, often redundant, datasets from these domains presents a significant computational bottleneck. Existing solutions typically focus on optimizing model architectures or optimizers, while overlooking the inherent inefficiency of the training data itself. The conventional approach of iterating over the entire static dataset each epoch wastes considerable resources on easy-to-learn or repetitive samples. In this paper, we explore a novel training-efficiency technique for spatio-temporal forecasting, ST-Prune, which learns from complexity via dynamic sample pruning. By pruning samples dynamically, we aim to intelligently identify the most informative samples based on the model's real-time learning state, thereby accelerating convergence and improving training efficiency. Extensive experiments on real-world spatio-temporal datasets show that ST-Prune significantly accelerates training while maintaining or even improving model performance, and that it scales and generalizes well.
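The abstract describes selecting samples based on the model's real-time learning state, but the paper's exact pruning criterion is not given here. The sketch below is a hypothetical illustration of this style of dynamic sample pruning, using per-sample training loss as a proxy for learning complexity: each epoch it keeps the hardest (highest-loss) samples plus a small random slice of the rest, so that pruned samples can re-enter training later. The function name, ratios, and the loss-based criterion are all assumptions, not the paper's method.

```python
import numpy as np

def prune_samples(losses, keep_ratio=0.5, explore_ratio=0.1, rng=None):
    """Return indices of samples to train on this epoch.

    Hypothetical sketch of dynamic sample pruning: keep the
    highest-loss ("hardest") samples, plus a random exploration
    slice of the remainder so pruned samples can return later.
    """
    rng = rng or np.random.default_rng(0)
    n = len(losses)
    # Hardest samples first (descending loss).
    order = np.argsort(losses)[::-1]
    n_keep = int(n * keep_ratio)
    hard = order[:n_keep]
    # Randomly re-admit a few of the currently "easy" samples.
    rest = order[n_keep:]
    n_explore = min(int(n * explore_ratio), len(rest))
    explore = rng.choice(rest, size=n_explore, replace=False)
    return np.concatenate([hard, explore])
```

In a training loop, one would recompute `losses` from the latest forward pass and call `prune_samples` at each epoch, so the retained subset tracks the model's current learning state rather than a fixed schedule.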
Problem

Research questions and friction points this paper is trying to address.

spatio-temporal forecasting
training efficiency
sample redundancy
computational bottleneck
dynamic sample pruning
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic sample pruning
spatio-temporal forecasting
training efficiency
ST-Prune
learning from complexity