🤖 AI Summary
This paper addresses the limitations of large language models (LLMs) in graduate-level, knowledge-intensive financial tasks—including multimodal reasoning, time-series forecasting, scenario planning, and numerical modeling—by introducing XFinBench, the first multimodal evaluation benchmark for advanced finance (4,235 samples), accompanied by a domain-specific knowledge base covering 3,032 financial terms. Methodologically, it integrates textual, tabular, and visual inputs within a knowledge-enhanced framework coupled with fine-grained error analysis. Key contributions are: (1) a five-dimensional capability evaluation taxonomy; (2) empirical evidence that knowledge injection yields greater performance gains for smaller models; and (3) identification of computational precision and chart interpretation as critical bottlenecks. Experiments show the best pure-text model (o1) achieves only 67.3% overall accuracy—12.5 percentage points below human experts—highlighting fundamental gaps in LLMs’ capacity for complex financial reasoning.
📝 Abstract
Solving financial problems demands complex reasoning, multimodal data processing, and a broad technical understanding, presenting unique challenges for current large language models (LLMs). We introduce XFinBench, a novel benchmark with 4,235 examples designed to evaluate LLMs' ability to solve complex, knowledge-intensive financial problems across diverse graduate-level finance topics with multimodal context. With XFinBench, we identify five core capabilities of LLMs, i.e., terminology understanding, temporal reasoning, future forecasting, scenario planning, and numerical modelling. Upon XFinBench, we conduct extensive experiments on 18 leading models. The results show that o1 is the best-performing text-only model, with an overall accuracy of 67.3%, but it still lags significantly behind human experts by 12.5 percentage points, especially in temporal reasoning and scenario planning capabilities. We further construct a knowledge bank with 3,032 finance terms for knowledge-augmentation analysis, and find that question-relevant knowledge brings consistent accuracy improvements only to small open-source models. Additionally, our error analysis reveals that rounding errors during calculation and blindness to the positions and intersections of curves in images are the two primary issues behind models' poor performance on calculation-based and visual-context questions, respectively. Code and dataset are accessible via GitHub: https://github.com/Zhihan72/XFinBench.
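The knowledge-augmentation analysis described above can be pictured as a simple retrieval step: given a question, look up relevant entries in the finance knowledge bank and prepend their definitions to the model's prompt. The following is a minimal, hypothetical sketch of that idea — the toy knowledge bank and the word-overlap retrieval are illustrative assumptions, not the paper's actual pipeline:

```python
# Sketch of knowledge-augmented prompting. The entries and the
# overlap-based retrieval below are illustrative only; XFinBench's
# knowledge bank contains 3,032 finance terms.

# Hypothetical term -> definition knowledge bank.
KNOWLEDGE_BANK = {
    "duration": "A bond's price sensitivity to interest-rate changes.",
    "yield curve": "A plot of bond yields against maturities.",
    "net present value": "The sum of discounted future cash flows minus cost.",
}

def retrieve_terms(question: str, top_k: int = 2) -> list[str]:
    """Rank knowledge-bank terms by simple word overlap with the question."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(term.split())), term)
        for term in KNOWLEDGE_BANK
    ]
    scored = [(score, term) for score, term in scored if score > 0]
    scored.sort(reverse=True)
    return [term for _, term in scored[:top_k]]

def augment_prompt(question: str) -> str:
    """Prepend retrieved definitions to the question before querying a model."""
    terms = retrieve_terms(question)
    knowledge = "\n".join(f"- {t}: {KNOWLEDGE_BANK[t]}" for t in terms)
    return f"Relevant knowledge:\n{knowledge}\n\nQuestion: {question}"

print(augment_prompt("How does duration affect a bond's price when rates rise?"))
```

In this framing, the paper's finding is that the extra "Relevant knowledge" section helps small open-source models consistently, while larger models see little benefit.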