🤖 AI Summary
This work proposes an in-context learning approach that leverages large language models (LLMs) for volatility forecasting in non-stationary financial markets without requiring fine-tuning. To address the challenge of volatility dynamics that shift across distinct market regimes, the method introduces an oracle-guided exemplar construction mechanism coupled with a conditional sampling strategy. This enables the LLM to draw on contextually aligned examples selected according to the estimated market state, thereby improving prediction accuracy. Empirical evaluations across multiple financial datasets show that the proposed approach outperforms conventional models and standard in-context learning baselines, with particularly pronounced gains during periods of high market volatility. The results establish a novel paradigm for applying LLMs to non-stationary time series modeling in finance.
📝 Abstract
This work introduces a regime-aware in-context learning framework that leverages large language models (LLMs) for financial volatility forecasting under nonstationary market conditions. The proposed approach uses pretrained LLMs to reason over historical volatility patterns and adjust their predictions without parameter fine-tuning. We develop an oracle-guided refinement procedure that constructs regime-aware demonstrations from training data. An LLM then acts as an in-context learner, predicting the next-step volatility for an input sequence using demonstrations sampled conditionally on the estimated market regime label. This conditional sampling strategy enables the LLM to adapt its predictions to regime-dependent volatility dynamics through contextual reasoning alone. Experiments on multiple financial datasets show that the proposed regime-aware in-context learning framework outperforms both classical volatility forecasting approaches and direct one-shot learning, especially during high-volatility periods.
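The regime-conditional demonstration sampling described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the threshold-based regime estimator, the demonstration-pool format, and all function names (`estimate_regime`, `sample_demonstrations`, `build_prompt`) are assumptions introduced for illustration, and the constructed prompt would be sent to an LLM in practice.

```python
import random

def estimate_regime(window, threshold=0.02):
    """Label the market state from the recent return window.

    Simple stand-in for the paper's regime estimate (assumption):
    mean absolute return above `threshold` -> "high", else "low".
    """
    vol = sum(abs(r) for r in window) / len(window)
    return "high" if vol > threshold else "low"

def sample_demonstrations(pool, regime, k=3, seed=0):
    """Draw up to k exemplars whose stored regime label matches."""
    matching = [d for d in pool if d["regime"] == regime]
    rng = random.Random(seed)
    return rng.sample(matching, min(k, len(matching)))

def build_prompt(demos, query_window):
    """Format regime-matched exemplars plus the query as an ICL prompt."""
    lines = [f"Returns: {d['window']} -> Volatility: {d['target']}"
             for d in demos]
    lines.append(f"Returns: {query_window} -> Volatility:")
    return "\n".join(lines)

# Toy demonstration pool with precomputed regime labels (illustrative data).
pool = [
    {"window": [0.001, -0.002, 0.001], "target": 0.0015, "regime": "low"},
    {"window": [0.04, -0.05, 0.03],    "target": 0.041,  "regime": "high"},
    {"window": [0.002, 0.001, -0.001], "target": 0.0013, "regime": "low"},
    {"window": [-0.06, 0.05, -0.04],   "target": 0.052,  "regime": "high"},
]

query = [0.05, -0.04, 0.06]        # a turbulent recent window
regime = estimate_regime(query)    # -> "high"
demos = sample_demonstrations(pool, regime, k=2)
prompt = build_prompt(demos, query)
print(prompt)
```

Conditioning the sample on the estimated regime is the key step: in a high-volatility state, only high-volatility exemplars enter the context, so the LLM's contextual reasoning is steered toward the relevant dynamics without any parameter update.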