🤖 AI Summary
In long-term time series forecasting, frequency-domain methods suffer from frequency imbalance—overfitting high-frequency components while underfitting low-frequency ones—due to a uniform optimization objective. To address this, we propose the first frequency-granularity-aware adaptive learning framework. Our method introduces a differentiable frequency-state discriminator that dynamically monitors convergence behavior across frequency components and applies gradient reweighting accordingly, enabling synchronized optimization of multi-scale periodic patterns. Crucially, we embed convergence assessment directly into the frequency-domain training pipeline, yielding an end-to-end trainable architecture. Extensive experiments on seven real-world datasets demonstrate that our approach consistently outperforms state-of-the-art methods, achieving an average 9.2% reduction in MAE and significantly improving both accuracy and stability in long-horizon forecasting.
📝 Abstract
Time-series forecasting is crucial for numerous real-world applications including weather prediction and financial market modeling. While temporal-domain methods remain prevalent, frequency-domain approaches can effectively capture multi-scale periodic patterns, reduce sequence dependencies, and naturally denoise signals. However, existing approaches typically train model components for all frequencies under a unified training objective, often leading to mismatched learning speeds: high-frequency components converge faster and risk overfitting, while low-frequency components underfit due to insufficient training time. To deal with this challenge, we propose BEAT (Balanced frEquency Adaptive Tuning), a novel framework that dynamically monitors the training status for each frequency and adaptively adjusts their gradient updates. By recognizing convergence, overfitting, or underfitting for each frequency, BEAT dynamically reallocates learning priorities, moderating gradients for rapid learners and increasing those for slower ones, alleviating the tension between competing objectives across frequencies and synchronizing the overall learning process. Extensive experiments on seven real-world datasets demonstrate that BEAT consistently outperforms state-of-the-art approaches.