Optimal Look-back Horizon for Time Series Forecasting in Federated Learning

📅 2025-11-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Adaptive selection of the look-back horizon for time-series forecasting in federated learning remains challenging due to data decentralization, non-IID distributions, and client heterogeneity. Method: We propose the first intrinsic representation space adaptation framework tailored for federated settings. Our approach models temporal structural heterogeneity via a synthetic data generator, integrates intrinsic-space mapping, Bayesian error decomposition, and geometric statistical modeling, and achieves optimal trade-offs among prediction errors under decentralized constraints. Contribution/Results: We theoretically prove that the optimal look-back horizon corresponds to the minimal point where the irreducible loss saturates. Empirically, our method significantly improves both forecasting accuracy and communication efficiency across diverse heterogeneous time-series tasks, outperforming existing baselines in federated forecasting benchmarks.

Technology Category

Application Category

📝 Abstract
Selecting an appropriate look-back horizon remains a fundamental challenge in time series forecasting (TSF), particularly in the federated learning scenarios where data is decentralized, heterogeneous, and often non-independent. While recent work has explored horizon selection by preserving forecasting-relevant information in an intrinsic space, these approaches are primarily restricted to centralized and independently distributed settings. This paper presents a principled framework for adaptive horizon selection in federated time series forecasting through an intrinsic space formulation. We introduce a synthetic data generator (SDG) that captures essential temporal structures in client data, including autoregressive dependencies, seasonality, and trend, while incorporating client-specific heterogeneity. Building on this model, we define a transformation that maps time series windows into an intrinsic representation space with well-defined geometric and statistical properties. We then derive a decomposition of the forecasting loss into a Bayesian term, which reflects irreducible uncertainty, and an approximation term, which accounts for finite-sample effects and limited model capacity. Our analysis shows that while increasing the look-back horizon improves the identifiability of deterministic patterns, it also increases approximation error due to higher model complexity and reduced sample efficiency. We prove that the total forecasting loss is minimized at the smallest horizon where the irreducible loss starts to saturate, while the approximation loss continues to rise. This work provides a rigorous theoretical foundation for adaptive horizon selection for time series forecasting in federated learning.
Problem

Research questions and friction points this paper is trying to address.

Addresses optimal look-back horizon selection in federated time series forecasting
Analyzes forecasting loss decomposition into irreducible and approximation errors
Proves minimal loss occurs at horizon where Bayesian loss saturates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic data generator captures client temporal heterogeneity
Intrinsic space mapping with geometric statistical properties
Optimal horizon balances irreducible and approximation losses
🔎 Similar Papers
No similar papers found.