FIRE: A Comprehensive Benchmark for Financial Intelligence and Reasoning Evaluation

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a systematic evaluation benchmark for assessing the theoretical understanding and practical reasoning capabilities of large language models (LLMs) in the financial domain. To this end, we propose FIRE, a comprehensive evaluation benchmark that, for the first time, integrates questions from financial certification exams with real-world business scenarios. FIRE comprises 3,000 structured questions and open-ended problems, accompanied by standardized scoring rubrics. Leveraging a multidimensional capability taxonomy, we conduct a systematic evaluation of mainstream LLMs, including our in-house model XuanYuan 4.0. Our study not only reveals the current performance boundaries of existing models on financial tasks but also publicly releases the dataset and evaluation code, establishing a reliable benchmark to advance research in financial intelligence.

📝 Abstract
We introduce FIRE, a comprehensive benchmark designed to evaluate both the theoretical financial knowledge of LLMs and their ability to handle practical business scenarios. For theoretical assessment, we curate a diverse set of examination questions drawn from widely recognized financial qualification exams, enabling evaluation of LLMs' deep understanding and application of financial knowledge. In addition, to assess the practical value of LLMs in real-world financial tasks, we propose a systematic evaluation matrix that categorizes complex financial domains and ensures coverage of essential subdomains and business activities. Based on this evaluation matrix, we collect 3,000 financial scenario questions, consisting of closed-form decision questions with reference answers and open-ended questions evaluated by predefined rubrics. We conduct comprehensive evaluations of state-of-the-art LLMs on the FIRE benchmark, including XuanYuan 4.0, our latest financial-domain model, as a strong in-domain baseline. These results enable a systematic analysis of the capability boundaries of current LLMs in financial applications. We publicly release the benchmark questions and evaluation code to facilitate future research.
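The abstract describes two scoring modes: closed-form decision questions checked against a reference answer, and open-ended questions graded by predefined rubrics. A minimal sketch of how such a scorer might look, assuming a hypothetical data format (the field names and weighting scheme below are illustrative, not the released FIRE schema):

```python
# Sketch of the two scoring modes described in the abstract.
# Field names, weights, and the normalization scheme are assumptions,
# not the released FIRE evaluation code.

def score_closed_form(prediction: str, reference: str) -> float:
    """Closed-form decision questions: exact match against the reference answer."""
    return 1.0 if prediction.strip().lower() == reference.strip().lower() else 0.0

def score_open_ended(rubric_scores: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Open-ended questions: weighted average of per-criterion rubric scores,
    each score assumed to lie in [0, 1]."""
    total = sum(weights.values())
    return sum(rubric_scores[c] * w for c, w in weights.items()) / total

# Hypothetical usage
print(score_closed_form(" B ", "b"))  # 1.0
rubric = {"accuracy": 0.8, "completeness": 0.6, "compliance": 1.0}
weights = {"accuracy": 0.5, "completeness": 0.3, "compliance": 0.2}
print(round(score_open_ended(rubric, weights), 2))  # 0.78
```

In practice a rubric grader for open-ended answers would itself be an LLM or human judge producing the per-criterion scores; the sketch only shows how those scores might be aggregated.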
Problem

Research questions and friction points this paper is trying to address.

financial intelligence
reasoning evaluation
large language models
benchmark
financial applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

financial benchmark
LLM evaluation
reasoning assessment
evaluation matrix
financial intelligence
👥 Authors

Xiyuan Zhang
AWS AI
data mining, natural language processing, time-series analysis, IoT, mobile computing

Huihang Wu
The PBC School of Finance, Tsinghua University

Jiayu Guo
The School of Finance, Renmin University of China

Zhenlin Zhang
The PBC School of Finance, Tsinghua University

Yiwei Zhang
The PBC School of Finance, Tsinghua University

Liangyu Huo
Du Xiaoman Technology

Xiaoxiao Ma
Oracle, Macquarie University
LLM, deep generative models, anomaly detection, graph neural networks

Jiansong Wan
Du Xiaoman Technology

Xuewei Jiao
Du Xiaoman Technology

Yi Jing
Tsinghua University
LLM interpretability

Jian Xie
Du Xiaoman Technology