🤖 AI Summary
Recent time-series forecasting research increasingly relies on complex architectures (e.g., Transformers, GNNs), yet their necessity remains empirically unverified. Method: this work proposes and rigorously evaluates the simple feedforward neural network (SFNN) as a strong baseline, employing univariate/multivariate input encoding and sliding-window modeling within a lightweight, reproducible evaluation framework. Critically, it redefines benchmarking protocols to enforce consistent data splitting, fair hyperparameter selection, and rigorous generalization assessment. Results: SFNNs match or surpass state-of-the-art models across diverse multivariate and univariate forecasting benchmarks, while reducing parameter counts by 1–2 orders of magnitude and accelerating training by 3–5×; they also demonstrate superior robustness to noise and distributional shift. Key contributions include: (i) establishing the SFNN as a new, accessible baseline; (ii) challenging the assumed necessity of complex multivariate modeling; and (iii) advocating a more principled, reproducible evaluation paradigm for time-series forecasting.
📝 Abstract
Time series data are everywhere -- from finance to healthcare -- and each domain brings its own unique complexities and structures. While advanced models like Transformers and graph neural networks (GNNs) have gained popularity in time series forecasting, largely due to their success in tasks like language modeling, their added complexity is not always necessary. In our work, we show that simple feedforward neural networks (SFNNs) can achieve performance on par with, or even exceeding, these state-of-the-art models, while being simpler, smaller, faster, and more robust. Our analysis indicates that, in many cases, univariate SFNNs are sufficient, implying that modeling interactions between multiple series may offer only marginal benefits. Even when inter-series relationships are strong, a basic multivariate SFNN still delivers competitive results. We also examine key design choices and offer guidelines for making informed decisions. Additionally, we critique existing benchmarking practices and propose an improved evaluation protocol. Although SFNNs may not be optimal for every situation (hence the "almost" in our title), they serve as a strong baseline against which future time series forecasting methods should always be compared.
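To make the sliding-window setup concrete, here is a minimal sketch of a univariate SFNN forecaster: slice the series into fixed-length lookback windows, then map each window to the next `horizon` values with a one-hidden-layer ReLU network trained by plain gradient descent. The hyperparameters (hidden width, learning rate, epoch count) are illustrative defaults, not the paper's settings, and the implementation is a didactic numpy version rather than the authors' code.

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Slice a 1-D series into (input window, forecast target) pairs."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback:t + lookback + horizon])
    return np.array(X), np.array(y)

class SFNN:
    """One-hidden-layer feedforward net, trained with full-batch
    gradient descent on mean-squared error (illustrative settings)."""

    def __init__(self, lookback, horizon, hidden=32, lr=1e-2, epochs=300, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(lookback, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, horizon))
        self.b2 = np.zeros(horizon)
        self.lr, self.epochs = lr, epochs

    def forward(self, X):
        self.h = np.maximum(0.0, X @ self.W1 + self.b1)  # ReLU hidden layer
        return self.h @ self.W2 + self.b2

    def fit(self, X, y):
        n = len(X)
        for _ in range(self.epochs):
            pred = self.forward(X)
            grad = 2.0 * (pred - y) / n                  # dMSE/dpred
            dW2 = self.h.T @ grad
            db2 = grad.sum(axis=0)
            dh = (grad @ self.W2.T) * (self.h > 0)       # backprop through ReLU
            dW1 = X.T @ dh
            db1 = dh.sum(axis=0)
            self.W2 -= self.lr * dW2; self.b2 -= self.lr * db2
            self.W1 -= self.lr * dW1; self.b1 -= self.lr * db1
        return self
```

A multivariate variant only changes the input encoding: flatten the `lookback × channels` window into one vector (or concatenate per-channel windows) and keep the network otherwise identical, which is part of why the baseline stays so small and fast.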