Test-Time Efficient Pretrained Model Portfolios for Time Series Forecasting

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In time series forecasting, a single large foundation model is not necessarily optimal. This paper proposes a portfolio-of-experts paradigm: first pretrain a set of lightweight general-purpose models, then specialize them via post-training to obtain a diverse collection of experts, and finally apply ensembling or model selection over the portfolio at inference time. This avoids the high parameter count and computational overhead of a monolithic large model while improving both inference efficiency and generalization. Experiments on large-scale benchmarks show that the approach matches or even surpasses large models using substantially fewer parameters and at markedly lower inference cost, while scaling well. The core contributions are empirical evidence that portfolios of specialized small models are superior on the accuracy-efficiency trade-off, and that ensembling and model selection are more compute-efficient than test-time fine-tuning.

📝 Abstract
Is bigger always better for time series foundation models? With this question in mind, we explore an alternative to training a single, large monolithic model: building a portfolio of smaller, pretrained forecasting models. By applying ensembling or model selection over these portfolios, we achieve competitive performance on large-scale benchmarks using far fewer parameters. We explore strategies for designing such portfolios and find that collections of specialist models consistently outperform portfolios of independently trained generalists. Remarkably, we demonstrate that post-training a base model is a compute-effective approach for creating sufficiently diverse specialists, and provide evidence that ensembling and model selection are more compute-efficient than test-time fine-tuning.
Problem

Research questions and friction points this paper is trying to address.

Exploring smaller pretrained model portfolios for time series forecasting
Achieving competitive performance with fewer parameters via ensembling
Demonstrating compute-efficient specialist creation and selection strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Portfolio of smaller pretrained forecasting models
Ensembling or model selection over specialist collections
Post-training base model for diverse specialists
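The two test-time strategies above can be sketched with a toy portfolio. This is a minimal illustration, not the paper's implementation: the three simple baselines stand in for pretrained specialist models, and the held-out-tail selection rule is an assumed, generic form of model selection.

```python
import numpy as np

# Hypothetical portfolio: each "model" maps a context window to a forecast.
# Simple statistical baselines stand in for pretrained specialists here.
def mean_model(context, horizon):
    return np.full(horizon, context.mean())

def drift_model(context, horizon):
    slope = (context[-1] - context[0]) / max(len(context) - 1, 1)
    return context[-1] + slope * np.arange(1, horizon + 1)

def last_value_model(context, horizon):
    return np.full(horizon, context[-1])

PORTFOLIO = [mean_model, drift_model, last_value_model]

def ensemble_forecast(context, horizon):
    """Ensembling: average the forecasts of every model in the portfolio."""
    preds = np.stack([m(context, horizon) for m in PORTFOLIO])
    return preds.mean(axis=0)

def select_and_forecast(context, horizon, holdout=8):
    """Model selection: pick the model with the lowest MAE on a
    held-out tail of the context, then forecast with it alone."""
    fit, val = context[:-holdout], context[-holdout:]
    errors = [np.abs(m(fit, holdout) - val).mean() for m in PORTFOLIO]
    best = PORTFOLIO[int(np.argmin(errors))]
    return best(context, horizon)

series = np.arange(40, dtype=float)  # a simple upward trend
print(select_and_forecast(series, horizon=4))  # the drift model wins on a trend
```

Either strategy only reads the frozen portfolio at inference time, which is what makes it cheaper than test-time fine-tuning: no gradient steps are taken per series, only forward passes over the portfolio members.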