🤖 AI Summary
Non-stationarity, heavy-tailed distributions, and high-frequency noise in financial time series severely degrade the forecasting accuracy of Transformer-based time series foundation models (TSFMs). Existing low-rank adaptation (LoRA) methods—constrained by fixed architectures and task-agnostic training objectives—fail to capture domain-specific statistical characteristics of financial data. To address this, we propose RefineBridge, the first generative fine-tuning framework grounded in Schrödinger Bridge theory. It treats the TSFM’s initial prediction as a prior and the ground-truth observation as a target, learning a context-aware, differentiable, progressive probabilistic calibration mapping via stochastic optimal transport. By unifying conditional diffusion and LoRA in a joint, architecture-agnostic end-to-end optimization, RefineBridge achieves consistent improvements across multi-scale financial forecasting tasks, reducing average MAE by 12.7% while enhancing accuracy, stability, and robustness.
📝 Abstract
Financial time series forecasting is particularly challenging for transformer-based time series foundation models (TSFMs) due to non-stationarity, heavy-tailed distributions, and high-frequency noise present in data. Low-rank adaptation (LoRA) has become a popular parameter-efficient method for adapting pre-trained TSFMs to downstream data domains. However, it still underperforms in financial data, as it preserves the network architecture and training objective of TSFMs rather than complementing the foundation model. To further enhance TSFMs, we propose a novel refinement module, RefineBridge, built upon a tractable Schrödinger Bridge (SB) generative framework. Given the forecasts of TSFM as generative prior and the observed ground truths as targets, RefineBridge learns context-conditioned stochastic transport maps to improve TSFM predictions, iteratively approaching the ground-truth target from even a low-quality prior. Simulations on multiple financial benchmarks demonstrate that RefineBridge consistently improves the performance of state-of-the-art TSFMs across different prediction horizons.