🤖 AI Summary
Manual architecture design and inefficient module composition hinder performance and efficiency in time-series forecasting. Method: This paper proposes Hierarchical Neural Architecture Search (HNAS), a framework that automates model discovery for long-horizon forecasting. Contribution/Results: HNAS introduces (1) a novel hierarchical search space that unifies heterogeneous architectures—including CNNs, RNNs, and Transformers—enabling efficient cross-module composition and task-adaptive search; and (2) a lightweight, differentiable optimization strategy integrated with a modular search space tailored for long-term forecasting. Evaluated on multiple long-horizon benchmarks, models discovered by HNAS reduce parameter count by 30–50% while achieving significantly lower prediction errors than state-of-the-art manually designed baselines. These results demonstrate HNAS’s capability to systematically unlock the potential of deep learning modules through principled, automated architecture discovery.
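The "lightweight, differentiable optimization strategy" described above can be illustrated with a DARTS-style continuous relaxation: each edge in the search space mixes candidate modules via a softmax over learnable architecture parameters. This is a minimal sketch under that assumption; the toy candidate ops (`last_value`, `moving_average`, `linear_trend`) are illustrative stand-ins for the CNN/RNN/Transformer branches, not the paper's actual modules.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over architecture parameters.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy candidate "modules": each maps a history window x to an H-step forecast.
def last_value(x, H):
    return np.repeat(x[-1], H)

def moving_average(x, H, k=3):
    return np.repeat(x[-k:].mean(), H)

def linear_trend(x, H):
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    future = np.arange(len(x), len(x) + H)
    return slope * future + intercept

def mixed_forecast(x, H, alpha):
    """Continuous relaxation of the module choice: the forecast is a
    softmax(alpha)-weighted sum of all candidate module outputs, so the
    architecture parameters alpha can be optimized by gradient descent."""
    ops = [last_value, moving_average, linear_trend]
    w = softmax(alpha)
    return sum(wi * op(x, H) for wi, op in zip(w, ops))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
alpha = np.array([0.0, 0.0, 2.0])  # hypothetical weights favoring the trend branch
y = mixed_forecast(x, H=3, alpha=alpha)
```

After search converges, the highest-weight branch on each edge is kept and the rest are pruned, yielding a discrete, lightweight architecture.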
📝 Abstract
The rapid development of time series forecasting research has introduced many deep learning-based modules to this field. However, despite the increasing number of new forecasting architectures, it remains unclear whether we have leveraged the full potential of these existing modules within a properly designed architecture. In this work, we propose a novel hierarchical neural architecture search approach for time series forecasting tasks. Through the design of a hierarchical search space, we incorporate many architecture types designed for forecasting tasks and enable the efficient combination of different forecasting architecture modules. Results on long-term time-series forecasting tasks show that our approach can discover lightweight, high-performing forecasting architectures across different forecasting tasks.