Automatic selection of the best neural architecture for time series forecasting via multi-objective optimization and Pareto optimality conditions

📅 2025-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
No universally optimal model exists for time-series forecasting. Method: This paper proposes the first neural architecture selection framework based on multi-objective evolutionary algorithms (MOEAs) and Pareto optimality, integrating LSTM, GRU, multi-head attention, and state-space model (SSM) modules. It jointly optimizes accuracy, training efficiency, and other criteria, enabling user preference–driven customization. Contribution/Results: Departing from the “single-best-model” assumption, the framework establishes a dual-driven paradigm—guided by both data characteristics and user requirements—for generating composite architectures. Evaluated across four real-world datasets, it shows that single-layer RNNs dominate under pure speed objectives, whereas cross-module composite architectures significantly outperform alternatives under accuracy- or balance-oriented objectives. Moreover, several novel, context-specific Pareto-optimal architectures are discovered, demonstrating the framework’s capacity to uncover domain-adapted solutions beyond conventional designs.

📝 Abstract
Time series forecasting plays a pivotal role in a wide range of applications, including weather prediction, healthcare, structural health monitoring, predictive maintenance, energy systems, and financial markets. While models such as LSTM, GRU, Transformers, and State-Space Models (SSMs) have become standard tools in this domain, selecting the optimal architecture remains a challenge. Performance comparisons often depend on evaluation metrics and the datasets under analysis, making the choice of a universally optimal model controversial. In this work, we introduce a flexible automated framework for time series forecasting that systematically designs and evaluates diverse network architectures by integrating LSTM, GRU, multi-head Attention, and SSM blocks. Using a multi-objective optimization approach, our framework determines the number, sequence, and combination of blocks to align with specific requirements and evaluation objectives. From the resulting Pareto-optimal architectures, the best model for a given context is selected via a user-defined preference function. We validate our framework across four distinct real-world applications. Results show that a single-layer GRU or LSTM is usually optimal when minimizing training time alone. However, when maximizing accuracy or balancing multiple objectives, the best architectures are often composite designs incorporating multiple block types in specific configurations. By employing a weighted preference function, users can resolve trade-offs between objectives, revealing novel, context-specific optimal architectures. Our findings underscore that no single neural architecture is universally optimal for time series forecasting. Instead, the best-performing model emerges as a data-driven composite architecture tailored to user-defined criteria and evaluation objectives.
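The abstract describes a two-stage selection: filter candidate architectures down to the Pareto-optimal set, then pick one via a user-defined weighted preference function. A minimal sketch of that idea follows; the candidate architectures, their scores, and the weights are illustrative placeholders, not results or code from the paper.

```python
# Sketch of Pareto filtering plus weighted-preference selection.
# Each candidate is scored on objectives that are all minimized here
# (forecast error, training time). Names and numbers are hypothetical.

def pareto_front(candidates):
    """Return candidates not dominated by any other (all objectives minimized)."""
    front = []
    for name, scores in candidates.items():
        dominated = any(
            all(o <= s for o, s in zip(other, scores)) and other != scores
            for oname, other in candidates.items() if oname != name
        )
        if not dominated:
            front.append((name, scores))
    return front

def select(front, weights):
    """Pick the front member minimizing a normalized weighted sum of objectives."""
    n = len(weights)
    lows = [min(s[i] for _, s in front) for i in range(n)]
    highs = [max(s[i] for _, s in front) for i in range(n)]

    def pref(scores):
        # Min-max normalize each objective, then combine with user weights.
        return sum(
            w * (s - lo) / (hi - lo or 1.0)
            for w, s, lo, hi in zip(weights, scores, lows, highs)
        )

    return min(front, key=lambda c: pref(c[1]))

# Hypothetical candidates: (forecast error, training time in minutes).
candidates = {
    "GRU-1":           (0.31, 2.0),
    "LSTM-1":          (0.30, 2.4),
    "LSTM+Attn":       (0.22, 9.0),
    "GRU+SSM+Attn":    (0.20, 14.0),
    "LSTM+LSTM":       (0.28, 5.0),
    "Attn-only (bad)": (0.35, 12.0),  # dominated on both axes, filtered out
}

front = pareto_front(candidates)
speed_pick = select(front, weights=(0.1, 0.9))     # favor training time
accuracy_pick = select(front, weights=(0.9, 0.1))  # favor accuracy
print(speed_pick[0], accuracy_pick[0])
```

With these illustrative numbers, the speed-weighted preference selects the single-layer GRU while the accuracy-weighted one selects the composite architecture, mirroring the trade-off pattern the abstract reports.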
Problem

Research questions and friction points this paper is trying to address.

Time Series Prediction
Neural Network Architectures
Domain-specific Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-objective Optimization
Pareto Optimality
Neural Network Ensemble
Qianying Cao
Division of Applied Mathematics, Brown University, Providence, RI 02906, U.S.A.
Shanqing Liu
Division of Applied Mathematics, Brown University
Optimal Control · HJB equation · Numerical Methods
Alan John Varghese
School of Engineering, Brown University, Providence, RI 02906, U.S.A.
Jerome Darbon
Division of Applied Mathematics, Brown University, Providence, RI 02906, U.S.A.
Michael Triantafyllou
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.
George Em Karniadakis
The Charles Pitts Robinson and John Palmer Barstow Professor of Applied Mathematics and Engineering
Math + Machine Learning · Probabilistic Scientific Computing · Stochastic Multiscale Modeling