🤖 AI Summary
Existing spatiotemporal forecasting models suffer significant performance degradation when input-label pairs exhibit spatiotemporal discrepancies, such as similar inputs leading to divergent futures. To address this, the authors propose ReLearner, a bidirectional learning framework that explicitly models and smooths the spatiotemporal residual between inputs and labels by incorporating label features. Grounded in a newly established spatiotemporal residual theorem, ReLearner integrates residual decoupling and smoothing modules to extend the conventional unidirectional prediction paradigm into a bidirectional learning process, enabling seamless integration into diverse spatiotemporal neural architectures. Extensive experiments across 11 real-world datasets and 14 backbone models demonstrate that ReLearner consistently and substantially improves forecasting accuracy, confirming its generality and effectiveness.
📝 Abstract
Prevailing spatiotemporal prediction models typically operate under a forward (unidirectional) learning paradigm, in which models extract spatiotemporal features from historical observations (the input) and map them to the target spatiotemporal space to forecast the future (the label). However, these models frequently exhibit suboptimal performance when spatiotemporal discrepancies exist between inputs and labels, for instance, when nodes with similar time-series inputs manifest distinct future labels, or vice versa. To address this limitation, we propose explicitly incorporating label features during the training phase. Specifically, we introduce the Spatiotemporal Residual Theorem, which generalizes the conventional unidirectional spatiotemporal prediction paradigm into a bidirectional learning framework. Building upon this theoretical foundation, we design a universal module, termed ReLearner, which seamlessly augments Spatiotemporal Neural Networks (STNNs) with bidirectional learning capability via an auxiliary inverse learning process, in which the model relearns the spatiotemporal feature residuals between input data and future data. The proposed ReLearner comprises two critical components: (1) a Residual Learning Module, designed to effectively disentangle spatiotemporal feature discrepancies between input and label representations; and (2) a Residual Smoothing Module, employed to smooth residual terms and facilitate stable convergence. Extensive experiments conducted on 11 real-world datasets across 14 backbone models demonstrate that ReLearner significantly enhances the predictive performance of existing STNNs. Our code is available on GitHub.
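To make the bidirectional idea concrete, the following is a minimal NumPy sketch of the training-time data flow the abstract describes: a forward pass maps input features to a forecast, an auxiliary inverse pass embeds the label and forms the feature residual against the input embedding, and the residual is smoothed for stability. All function names, shapes, linear encoders, and the exponential-moving-average smoothing are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): N nodes, T_in input steps,
# T_out forecast steps, d hidden feature dimensions.
N, T_in, T_out, d = 8, 12, 6, 16

W_enc = rng.normal(0, 0.1, (T_in, d))   # input encoder: history -> features
W_dec = rng.normal(0, 0.1, (d, T_out))  # decoder: features -> forecast
W_lab = rng.normal(0, 0.1, (T_out, d))  # label encoder, used only at training

def forward(x):
    """Conventional unidirectional pass: input features -> forecast."""
    h_in = x @ W_enc           # (N, d) spatiotemporal features of the input
    return h_in, h_in @ W_dec  # (N, T_out) forecast

def residual_learning(h_in, y):
    """Inverse pass: embed the future window and take the feature residual."""
    h_lab = y @ W_lab          # (N, d) spatiotemporal features of the label
    return h_lab - h_in        # residual between label and input features

def residual_smoothing(r, r_ema, beta=0.9):
    """Smooth residual terms (here via an EMA) to stabilize training."""
    return beta * r_ema + (1 - beta) * r

x = rng.normal(size=(N, T_in))   # historical observations
y = rng.normal(size=(N, T_out))  # future labels (available during training)

h_in, y_hat = forward(x)
r = residual_learning(h_in, y)
r_smooth = residual_smoothing(r, np.zeros_like(r))
print(y_hat.shape, r_smooth.shape)  # forecast and smoothed residual shapes
```

In a real STNN the two linear maps would be the backbone's encoder/decoder, and the smoothed residual would feed an auxiliary training loss; at inference time only the forward pass is used, since labels are unavailable.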