What Matters in Deep Learning for Time Series Forecasting?

📅 2025-12-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
The proliferation of deep learning architectures for time-series forecasting, together with contradictory empirical results, obscures the identification of critical design principles. Method: We propose the “Forecasting Model Card” framework—a systematic approach that maps architectures onto interpretable design-principle dimensions (e.g., locality vs. globality in modeling, multi-series foundations, implementation-level choices) and quantifies how implementation biases affect method categorization and performance. Through cross-architecture ablation studies, critical benchmark evaluation, and standardized architectural representation, we isolate the impact of design decisions from incidental implementation details. Contribution/Results: We demonstrate that high-level design principles—not specific layer instantiations—are the primary determinants of performance; parsimonious, principle-driven models achieve accuracy competitive with state-of-the-art methods. Our work advocates a shift toward problem-centric analysis, design transparency, and a rethinking of benchmarking practices in time-series forecasting.

📝 Abstract
Deep learning models have grown increasingly popular in time series applications. However, the large quantity of newly proposed architectures, together with often contradictory empirical results, makes it difficult to assess which components contribute significantly to final performance. We aim to make sense of the current design space of deep learning architectures for time series forecasting by discussing the design dimensions and trade-offs that can explain the often unexpected observed results. This paper discusses the necessity of grounding model design on principles for forecasting groups of time series and how such principles can be applied to current models. In particular, we assess how concepts such as locality and globality apply to recent forecasting architectures. We show that accounting for these aspects can be more relevant for achieving accurate results than adopting specific sequence modeling layers, and that simple, well-designed forecasting architectures can often match the state of the art. We discuss how overlooked implementation details in existing architectures (1) fundamentally change the class of the resulting forecasting method and (2) drastically affect the observed empirical results. Our results call for rethinking current faulty benchmarking practices and for focusing on the foundational aspects of the forecasting problem when designing architectures. As a step in this direction, we propose an auxiliary forecasting model card, whose fields serve to characterize existing and new forecasting architectures based on key design choices.
Problem

Research questions and friction points this paper is trying to address.

Assess which deep learning components actually drive time series forecasting performance
Ground model design in principles for forecasting groups of time series
Rethink current benchmarking practices and refocus on foundational aspects of the forecasting problem
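The locality/globality distinction the paper emphasizes can be made concrete with a toy contrast between fitting one model per series versus one model pooled across a group of series. A minimal sketch, assuming simple autoregressive features and linear models; the function and variable names here are illustrative, not the paper's implementation:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def make_windows(y, lags=3):
    """Turn one series into (lagged inputs, next-step targets)."""
    X = np.stack([y[i:i + lags] for i in range(len(y) - lags)])
    t = y[lags:]
    return X, t

rng = np.random.default_rng(0)
series = [np.cumsum(rng.normal(size=60)) for _ in range(5)]

# Local approach: one model per series, fit only on that series' history.
local_models = [LinearRegression().fit(*make_windows(y)) for y in series]

# Global approach: a single model fit on windows pooled across all series.
Xs, ts = zip(*(make_windows(y) for y in series))
global_model = LinearRegression().fit(np.vstack(Xs), np.concatenate(ts))

# Both produce a one-step forecast from the last `lags` observations.
last = series[0][-3:].reshape(1, -1)
print(local_models[0].predict(last), global_model.predict(last))
```

The global model sees far more training windows but must share parameters across heterogeneous series; the local models specialize but can overfit short histories. The paper argues that where an architecture sits on this axis often matters more than its sequence-modeling layer.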
Innovation

Methods, ideas, or system contributions that make the work stand out.

Focus on locality and globality as guiding design principles
Show that simple, well-designed architectures can match the state of the art
Propose an auxiliary forecasting model card to characterize architectures by key design choices