🤖 AI Summary
No universally optimal model exists for time-series forecasting. Method: This paper proposes the first neural architecture selection framework based on multi-objective evolutionary optimization (MOEA) and Pareto optimality, integrating LSTM, GRU, multi-head attention, and state-space model (SSM) modules. It jointly optimizes accuracy, training efficiency, and other criteria, enabling user preference–driven customization. Contribution/Results: Departing from the “single-best-model” assumption, the framework establishes a dual-driven paradigm—guided by both data characteristics and user requirements—for generating composite architectures. Evaluated across four real-world datasets, it reveals that single-layer RNNs dominate under pure speed objectives, whereas cross-module composite architectures significantly outperform alternatives under accuracy- or balance-oriented objectives. Moreover, several novel, context-specific Pareto-optimal architectures are discovered, demonstrating the framework’s capacity to uncover domain-adapted solutions beyond conventional designs.
📝 Abstract
Time series forecasting plays a pivotal role in a wide range of applications, including weather prediction, healthcare, structural health monitoring, predictive maintenance, energy systems, and financial markets. While models such as LSTM, GRU, Transformers, and State-Space Models (SSMs) have become standard tools in this domain, selecting the optimal architecture remains a challenge. Performance comparisons often depend on evaluation metrics and the datasets under analysis, making the choice of a universally optimal model controversial. In this work, we introduce a flexible automated framework for time series forecasting that systematically designs and evaluates diverse network architectures by integrating LSTM, GRU, multi-head Attention, and SSM blocks. Using a multi-objective optimization approach, our framework determines the number, sequence, and combination of blocks to align with specific requirements and evaluation objectives. From the resulting Pareto-optimal architectures, the best model for a given context is selected via a user-defined preference function. We validate our framework across four distinct real-world applications. Results show that a single-layer GRU or LSTM is usually optimal when minimizing training time alone. However, when maximizing accuracy or balancing multiple objectives, the best architectures are often composite designs incorporating multiple block types in specific configurations. By employing a weighted preference function, users can resolve trade-offs between objectives, revealing novel, context-specific optimal architectures. Our findings underscore that no single neural architecture is universally optimal for time series forecasting. Instead, the best-performing model emerges as a data-driven composite architecture tailored to user-defined criteria and evaluation objectives.