🤖 AI Summary
Financial time-series forecasting faces critical challenges: information loss caused by standardization, rigid model architectures (fixed numbers of variables and window lengths), and insufficient interpretability and uncertainty quantification. To address these, we propose UARPO, a reinforcement-learning fine-tuning framework that enables large language models to *adaptively assess prediction uncertainty*, dynamically select relevant variables, and adjust sliding-window lengths. We further introduce FVLDB, a diverse financial multimodal image–text dataset, and develop an end-to-end time-series reasoning pipeline grounded in vision–language foundation models. Experiments demonstrate that the resulting model, FinZero, achieves a 13.48% accuracy gain over GPT-4o on high-confidence samples, significantly improving generalization, interpretability, and deployment readiness in real-world financial applications.
📝 Abstract
Financial time series forecasting is both highly significant and challenging. Previous approaches typically standardize time series data before feeding it into forecasting models, but this encoding step inherently discards important information. Moreover, past time series models generally require a fixed number of variables or a fixed lookback window length, which further limits the scalability of time series forecasting. In addition, interpretability and uncertainty quantification in forecasting remain areas requiring further research, as both directly affect the reliability and practical value of predictions. To address these issues, we first construct a diverse financial image-text dataset (FVLDB) and develop the Uncertainty-adjusted Group Relative Policy Optimization (UARPO) method, which trains the model not only to output predictions but also to analyze the uncertainty of those predictions. We then propose FinZero, a multimodal pre-trained model fine-tuned with UARPO to perform reasoning, prediction, and analytical understanding on the FVLDB financial time series. Extensive experiments validate that FinZero exhibits strong adaptability and scalability. After fine-tuning with UARPO, FinZero achieves an approximate 13.48% improvement in prediction accuracy over GPT-4o in the high-confidence group, demonstrating the effectiveness of reinforcement learning fine-tuning of multimodal large models, including on financial time series forecasting tasks.
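The abstract does not spell out how UARPO adjusts the policy update. As an illustration only, assuming UARPO follows GRPO's group-normalized advantage scheme and reweights each rollout's advantage by the model's self-reported confidence (both the `uarpo_advantages` helper and the confidence-weighting rule below are hypothetical, not the paper's verified formulation), a minimal sketch might look like:

```python
import numpy as np

def grpo_advantages(rewards):
    """GRPO-style baseline: normalize rewards within a sampled group
    of rollouts, so each advantage is relative to the group mean."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def uarpo_advantages(rewards, confidences):
    """Hypothetical uncertainty adjustment: scale each group-relative
    advantage by the model's reported confidence, so high-confidence
    predictions (right or wrong) move the policy more than uncertain ones."""
    adv = grpo_advantages(rewards)
    c = np.asarray(confidences, dtype=float)
    return adv * c

# Toy group of 4 rollouts: reward 1 = correct direction call, 0 = wrong.
rewards = [1.0, 0.0, 1.0, 0.0]
confidences = [0.9, 0.8, 0.3, 0.2]  # model-reported confidence per rollout
print(uarpo_advantages(rewards, confidences))
```

Under this sketch, a confident wrong prediction is penalized harder than an uncertain wrong one, which is one plausible way to make the model's confidence estimates calibrated enough to support the high-confidence group evaluation reported above.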