🤖 AI Summary
This work addresses the lack of a systematic benchmark for assessing the theoretical understanding and practical reasoning capabilities of large language models (LLMs) in the financial domain. To this end, we propose FIRE, a comprehensive evaluation benchmark that, for the first time, integrates questions from financial certification exams with real-world business scenarios. FIRE comprises 3,000 questions, spanning closed-form items with reference answers and open-ended problems scored against standardized rubrics. Leveraging a multidimensional capability taxonomy, we systematically evaluate mainstream LLMs, including our in-house model XuanYuan 4.0. Our study not only delineates the current performance boundaries of existing models on financial tasks but also publicly releases the dataset and evaluation code, establishing a reliable benchmark to advance research in financial intelligence.
📝 Abstract
We introduce FIRE, a comprehensive benchmark designed to evaluate both the theoretical financial knowledge of LLMs and their ability to handle practical business scenarios. For theoretical assessment, we curate a diverse set of examination questions drawn from widely recognized financial qualification exams, enabling evaluation of LLMs' deep understanding and application of financial knowledge. In addition, to assess the practical value of LLMs in real-world financial tasks, we propose a systematic evaluation matrix that categorizes complex financial domains and ensures coverage of essential subdomains and business activities. Based on this evaluation matrix, we collect 3,000 financial scenario questions, consisting of closed-form decision questions with reference answers and open-ended questions evaluated against predefined rubrics. We conduct comprehensive evaluations of state-of-the-art LLMs on the FIRE benchmark, including XuanYuan 4.0, our latest financial-domain model, as a strong in-domain baseline. These results enable a systematic analysis of the capability boundaries of current LLMs in financial applications. We publicly release the benchmark questions and evaluation code to facilitate future research.
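To make the two question types concrete, the sketch below illustrates one plausible way such items could be scored: exact match against a reference answer for closed-form questions, and the fraction of satisfied rubric criteria for open-ended questions. This is a minimal illustration only; the class names, fields, and scoring functions are assumptions for exposition and are not taken from the released FIRE evaluation code.

```python
from dataclasses import dataclass, field

# Hypothetical scoring sketch for FIRE-style items (illustrative, not the released code):
# closed-form questions are matched against a reference answer, while open-ended
# questions are scored against a list of rubric criteria.

@dataclass
class ClosedFormItem:
    question: str
    reference_answer: str  # e.g. an option label or a short final answer

@dataclass
class OpenEndedItem:
    question: str
    rubric: list[str] = field(default_factory=list)  # criteria the response should satisfy

def score_closed_form(item: ClosedFormItem, model_answer: str) -> float:
    """Exact-match scoring after light normalization."""
    return float(model_answer.strip().lower() == item.reference_answer.strip().lower())

def score_open_ended(item: OpenEndedItem, criteria_met: list[bool]) -> float:
    """Fraction of rubric criteria the response satisfies; the per-criterion
    judgments are assumed to come from human annotators or an LLM judge."""
    if not item.rubric:
        return 0.0
    return sum(criteria_met) / len(item.rubric)

if __name__ == "__main__":
    cf = ClosedFormItem("Which ratio measures short-term liquidity?", "current ratio")
    oe = OpenEndedItem(
        "Outline the steps of a credit risk assessment.",
        rubric=[
            "identifies borrower data sources",
            "mentions probability of default",
            "discusses collateral or mitigation",
        ],
    )
    print(score_closed_form(cf, "Current Ratio"))     # 1.0
    print(score_open_ended(oe, [True, True, False]))  # ~0.67
```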