🤖 AI Summary
This study investigates the feasibility and reliability of leveraging large language models (LLMs) to automatically generate financial reports from financial time series data. Addressing two core challenges—factual accuracy and financial logical reasoning—we propose a systematic framework integrating prompt engineering, multi-model comparative evaluation, and a novel automated information categorization and highlighting mechanism that precisely identifies data-driven insights, financial reasoning chains, and external knowledge dependencies within generated reports. The method is rigorously validated on both real-world market data and controlled synthetic datasets. Experimental results demonstrate that the optimal configuration not only produces syntactically coherent and information-rich reports but also significantly outperforms baselines in key factual recall (+23.6%) and reasoning chain completeness (+19.4%). This work establishes an interpretable evaluation paradigm and practical implementation pathway for trustworthy AI-powered financial reporting.
📝 Abstract
This paper explores the potential of large language models (LLMs) to generate financial reports from time series data. We propose a framework encompassing prompt engineering, model selection, and evaluation. We introduce an automated highlighting system to categorize information within the generated reports, differentiating between insights derived directly from time series data, stemming from financial reasoning, and those reliant on external knowledge. This approach aids in evaluating the factual grounding and reasoning capabilities of the models. Our experiments, utilizing both data from the real stock market indices and synthetic time series, demonstrate the capability of LLMs to produce coherent and informative financial reports.