🤖 AI Summary
This work challenges the prevailing assumption that larger models are inherently superior for time-series forecasting, questioning the universality of scaling laws in this domain. To address this, we propose Alinear—a hyper-lightweight model that abandons attention mechanisms entirely. Alinear introduces two novel components: (i) horizon-aware temporal adaptive decomposition, which dynamically disentangles trend and seasonal components according to forecast horizon, and (ii) progressive frequency attenuation, which selectively suppresses high-frequency noise while preserving predictive signals. These are integrated with efficient linear modeling for scalable, multi-horizon forecasting. Furthermore, we design a parameter-aware evaluation framework that quantifies the differential contributions of trend and seasonal components to overall prediction accuracy. Evaluated on seven benchmark datasets, Alinear achieves state-of-the-art performance across short- to ultra-long-term horizons—despite using less than 1% of the parameters of leading large models—demonstrating both the feasibility and superiority of compact, principled modeling in time-series forecasting.
📝 Abstract
The rapid expansion of model size has emerged as a key challenge in time series forecasting. From the early Transformer, at tens of megabytes, to recent architectures such as TimesNet, at thousands of megabytes, performance gains have often come at the cost of exponentially increasing parameter counts. But is this scaling truly necessary? To question the applicability of scaling laws in time series forecasting, we propose Alinear, an ultra-lightweight forecasting model that achieves competitive performance with only thousands (k-level) of parameters. We introduce a horizon-aware adaptive decomposition mechanism that dynamically rebalances the emphasis on trend and seasonal components across different forecast lengths, alongside a progressive frequency attenuation strategy that achieves stable prediction across various forecasting horizons without incurring the computational overhead of attention mechanisms. Extensive experiments on seven benchmark datasets demonstrate that Alinear consistently outperforms large-scale models while using less than 1% of their parameters, maintaining strong accuracy across both short and ultra-long forecasting horizons. Moreover, to evaluate model efficiency more fairly, we propose a new parameter-aware evaluation metric that highlights the superiority of Alinear under constrained parameter budgets. Our analysis reveals that the relative importance of trend and seasonal components varies with data characteristics rather than following a fixed pattern, validating the necessity of our adaptive design. This work challenges the prevailing belief that larger models are inherently better and suggests a paradigm shift toward more efficient time series modeling.
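The abstract does not give implementation details, but the overall pipeline — decompose the lookback window into trend and seasonal parts, damp high-frequency noise, then map each component to the horizon with a plain linear layer — can be sketched as below. Everything here is an assumption for illustration: the moving-average decomposition, the linear FFT taper, and the function names (`decompose`, `attenuate_high_freq`, `forecast`) are hypothetical stand-ins, not the paper's actual mechanisms (which are horizon-aware and learned).

```python
import numpy as np

def decompose(x, kernel=5):
    """Split a series into a trend (moving average) and a seasonal residual.
    A simple stand-in for the paper's horizon-aware adaptive decomposition."""
    pad = kernel // 2
    padded = np.pad(x, (pad, pad), mode="edge")
    trend = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    return trend, x - trend

def attenuate_high_freq(x, keep_ratio=0.5):
    """Progressively damp high-frequency FFT coefficients (illustrative):
    coefficients above the cutoff are tapered linearly toward zero."""
    spec = np.fft.rfft(x)
    n = len(spec)
    cutoff = int(n * keep_ratio)
    taper = np.ones(n)
    if n > cutoff:
        taper[cutoff:] = np.linspace(1.0, 0.0, n - cutoff)
    return np.fft.irfft(spec * taper, n=len(x))

def forecast(x, W_trend, W_seas, kernel=5):
    """Two linear heads map each component from lookback L to horizon H."""
    trend, seasonal = decompose(x, kernel)
    seasonal = attenuate_high_freq(seasonal)
    return W_trend @ trend + W_seas @ seasonal

# Toy usage: lookback L=16, horizon H=4, with naive repeat-last weights
# standing in for the learned (H, L) projection matrices.
L, H = 16, 4
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, L)) + 0.1 * rng.standard_normal(L)
W = np.zeros((H, L))
W[:, -1] = 1.0
y = forecast(x, W, W)
print(y.shape)  # (4,)
```

Note the parameter budget this implies: two `(H, L)` matrices give `2·H·L` parameters, which for typical lookback/horizon lengths stays in the thousands — consistent with the "k-level" scale the abstract claims, and far below attention-based models.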