🤖 AI Summary
Time-series forecasting suffers from poor generalization and low efficiency due to tight coupling among sequence representation, information extraction, and future projection. To address this, we propose a modular forecasting framework that decouples the pipeline into three independently optimizable stages (representation learning, information extraction, and target projection), enabling flexible, task-aware component configuration. The approach integrates convolutional layers with a lightweight self-attention mechanism, modeling local features efficiently while capturing long-range temporal dependencies. Evaluated on seven benchmark datasets, the method consistently outperforms existing state-of-the-art models in prediction accuracy while requiring significantly fewer parameters and achieving faster training and inference, improving both statistical performance and computational efficiency.
📝 Abstract
Since the advent of Transformers, time series forecasting has seen significant advances, yet it remains a challenging task, demanding effective sequence representation, meaningful information extraction, and precise future projection. Each dataset and forecasting configuration constitutes a distinct task with unique challenges the model must overcome to produce accurate predictions. To address these task-specific difficulties systematically, this work decomposes the time series forecasting pipeline into three core stages: input sequence representation, information extraction and memory construction, and final target projection. Within each stage, we investigate a range of architectural configurations to assess the effectiveness of various modules, such as convolutional layers for feature extraction and self-attention mechanisms for information extraction, across diverse forecasting tasks, with evaluations on seven benchmark datasets. Our models achieve state-of-the-art forecasting accuracy while greatly improving computational efficiency, with reduced training and inference times and a lower parameter count. The source code is available at https://github.com/RobertLeppich/REP-Net.
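The three-stage decomposition described above can be sketched in code. The following is a minimal NumPy illustration, not the authors' REP-Net implementation: all shapes, parameter names, and the specific choices (a tanh-activated 1D convolution for representation, single-head scaled dot-product attention for information extraction, a linear map for target projection) are assumptions chosen to make the modular structure concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    # Stage 1 building block: local feature extraction.
    # x: (T, d_in), w: (k, d_in, d_out); zero-padded "same" convolution.
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, k - 1 - pad), (0, 0)))
    return np.stack([np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0])])

def self_attention(x, wq, wk, wv):
    # Stage 2 building block: single-head scaled dot-product attention
    # over time steps, capturing long-range dependencies.
    q, k_, v = x @ wq, x @ wk, x @ wv
    scores = q @ k_.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

def forecast(x, params):
    # The three decoupled stages; each could be swapped independently.
    h = np.tanh(conv1d(x, params["conv_w"]))           # 1: representation
    m = self_attention(h, params["wq"], params["wk"],  # 2: extraction /
                       params["wv"])                   #    memory
    return m.reshape(-1) @ params["proj"]              # 3: projection

T, d_in, d_h, horizon = 24, 1, 8, 6  # hypothetical sizes
params = {
    "conv_w": rng.normal(size=(3, d_in, d_h)) * 0.1,
    "wq": rng.normal(size=(d_h, d_h)) * 0.1,
    "wk": rng.normal(size=(d_h, d_h)) * 0.1,
    "wv": rng.normal(size=(d_h, d_h)) * 0.1,
    "proj": rng.normal(size=(T * d_h, horizon)) * 0.1,
}
x = np.sin(np.linspace(0, 4 * np.pi, T)).reshape(T, 1)
y_hat = forecast(x, params)
print(y_hat.shape)  # (6,)
```

Because the stages only communicate through tensor interfaces, any stage can be reconfigured per task, e.g. replacing the attention module with a recurrent memory, without touching the other two.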