FinMultiTime: A Four-Modal Bilingual Dataset for Financial Time-Series Analysis

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing financial time-series datasets are typically limited to a single market and a narrow modality pair (e.g., prices plus news), small in scale, and weak at integrating heterogeneous information, which hinders real-world decision support. To address this, we introduce FinMultiTime, the first large-scale bilingual (Chinese–English) four-modal financial time-series dataset, synchronously aligned across 5,105 U.S. and A-share stocks from 2009 to 2025. It encompasses stock price sequences, candlestick (K-line) charts, structured financial reports, and bilingual news texts, supporting minute-, daily-, and quarterly-granularity analysis. We propose a cross-market temporal alignment algorithm, a multi-source cleaning pipeline, and a standardized modality-encoding framework (text embedding, tabular serialization, image patching, and time-series normalization) that enables high-resolution, reproducible updates. Experiments show that scale and data quality markedly improve forecasting accuracy, and that multimodal fusion yields moderate gains in Transformer-based models. FinMultiTime and its processing code are fully open-sourced, establishing the first benchmark for heterogeneous, multi-source financial time-series analysis.
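The paper's cross-market temporal alignment algorithm is not reproduced on this page. As a minimal illustration of the core idea only (keeping observations on dates where both markets actually traded, since U.S. and A-share holiday calendars differ), here is a hedged sketch with made-up prices; the tickers, values, and the inner-join-on-date strategy are assumptions, not the authors' implementation:

```python
from datetime import date

# Hypothetical daily closes for one U.S. stock and one A-share stock,
# keyed by trading date. The two markets observe different holidays,
# so their date sets only partially overlap.
us_closes = {date(2024, 1, 2): 185.6, date(2024, 1, 3): 184.2, date(2024, 1, 4): 181.9}
cn_closes = {date(2024, 1, 2): 27.4, date(2024, 1, 3): 27.1, date(2024, 1, 5): 26.8}

def align_daily(a: dict, b: dict) -> list:
    """Keep only the dates on which both markets traded (inner join on date)."""
    shared = sorted(set(a) & set(b))
    return [(d, a[d], b[d]) for d in shared]

aligned = align_daily(us_closes, cn_closes)
# Only 2024-01-02 and 2024-01-03 survive: 01-04 is absent from the
# A-share series and 01-05 from the U.S. series.
```

A production pipeline would additionally have to reconcile time zones and intraday session boundaries for the minute-level data; this sketch covers only the daily case.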

📝 Abstract
Pure time series forecasting tasks typically focus exclusively on numerical features; however, real-world financial decision-making demands the comparison and analysis of heterogeneous sources of information. Recent advances in deep learning and large language models (LLMs) have made significant strides in capturing sentiment and other qualitative signals, thereby enhancing the accuracy of financial time series predictions. Despite these advances, most existing datasets consist solely of price series and news text, are confined to a single market, and remain limited in scale. In this paper, we introduce FinMultiTime, the first large-scale, multimodal financial time series dataset. FinMultiTime temporally aligns four distinct modalities (financial news, structured financial tables, K-line technical charts, and stock price time series) across both the S&P 500 and HS 300 universes. Covering 5,105 stocks in the United States and China from 2009 to 2025, the dataset totals 112.6 GB and provides minute-level, daily, and quarterly resolutions, thus capturing short-, medium-, and long-term market signals with high fidelity. Our experiments demonstrate that (1) scale and data quality markedly boost prediction accuracy; (2) multimodal fusion yields moderate gains in Transformer models; and (3) a fully reproducible pipeline enables seamless dataset updates.
Problem

Research questions and friction points this paper is trying to address.

Integrating heterogeneous financial data for better decision-making
Addressing lack of large-scale multimodal financial datasets
Improving time-series prediction with multi-source fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal fusion of financial data
Large-scale bilingual dataset
Temporally aligned diverse modalities
Wenyan Xu
Central University of Finance and Economics
machine learning, AI for finance, multimodal learning, LLM
Dawei Xiang
University of Connecticut
computer vision, artificial intelligence, biomedical informatics, deep learning
Yue Liu
National University of Singapore, The University of Sydney, HEC Paris
Xiyu Wang
National University of Singapore, The University of Sydney, HEC Paris
Yanxiang Ma
PhD Student, University of Sydney
deep learning, adversarial robustness, image classification
Liang Zhang
National University of Singapore, The University of Sydney, HEC Paris
Chang Xu
National University of Singapore, The University of Sydney, HEC Paris
Jiaheng Zhang
Assistant Professor, National University of Singapore
zero-knowledge proofs, AI safety, applied cryptography, blockchain