🤖 AI Summary
This work addresses the poor performance of large language models (LLMs) on core banking numerical reasoning tasks—such as principal–interest estimation, interest rate comparison, and prepayment penalty calculation—and the lack of existing benchmarks that reflect real-world financial scenarios. To bridge the gap between general mathematical reasoning and domain-specific financial documentation, we introduce the first hierarchical benchmark grounded in authentic banking operations, comprising three progressively challenging task categories: single-product computation, multi-product comparison, and multi-condition compound reasoning. Leveraging this benchmark, we apply tool-augmented fine-tuning to open-source LLMs, substantially improving their ability to generate accurate formulas and perform correct computations. The approach achieves absolute accuracy gains of 57.6, 75.1, and 62.9 percentage points on basic, intermediate, and advanced tasks, respectively, significantly outperforming zero-shot baselines.
📝 Abstract
Large language model (LLM)-based chatbots are increasingly being adopted in the financial domain, particularly in digital banking, to handle customer inquiries about products such as deposits, savings, and loans. However, these models still exhibit low accuracy in core banking computations, including total payout estimation, comparison of products with varying interest rates, and interest calculation under early repayment conditions. Such tasks require multi-step numerical reasoning and contextual understanding of banking products, yet existing LLMs often make systematic errors: misinterpreting product types, applying conditions incorrectly, or failing basic calculations involving exponents and geometric progressions. Moreover, such errors have rarely been captured by existing benchmarks. Mathematical datasets focus on fundamental math problems, whereas financial benchmarks primarily target financial documents, leaving everyday banking scenarios underexplored. To address this limitation, we propose BankMathBench, a domain-specific dataset that reflects realistic banking tasks. BankMathBench is organized into three levels of difficulty (basic, intermediate, and advanced), corresponding to single-product reasoning, multi-product comparison, and multi-condition scenarios, respectively. When trained on BankMathBench, open-source LLMs exhibited notable improvements in both formula generation and numerical reasoning accuracy, demonstrating the dataset's effectiveness in enhancing domain-specific reasoning. With tool-augmented fine-tuning, the models achieved average accuracy increases of 57.6 (basic), 75.1 (intermediate), and 62.9 (advanced) percentage points, representing significant gains over zero-shot baselines. These findings highlight BankMathBench as a reliable benchmark for evaluating and advancing LLMs' numerical reasoning in real-world banking scenarios.
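To make the task types concrete, here is a minimal sketch of the kind of computation the abstract describes: a basic single-product payout (compound interest involves the exponent arithmetic LLMs often get wrong) and an intermediate comparison between two products with different rate structures. The function names, rates, and amounts are illustrative assumptions, not examples taken from BankMathBench.

```python
def compound_payout(principal, annual_rate, years, periods_per_year=12):
    """Total payout with periodic compounding: P * (1 + r/n)^(n*t)."""
    r = annual_rate / periods_per_year          # per-period rate
    n = periods_per_year * years                # number of compounding periods
    return principal * (1 + r) ** n

def simple_payout(principal, annual_rate, years):
    """Total payout with simple interest: P * (1 + r*t)."""
    return principal * (1 + annual_rate * years)

# Hypothetical multi-product comparison (illustrative numbers only):
# deposit A at 4% compounded monthly vs. deposit B at 4.5% simple interest.
a = compound_payout(10_000, 0.04, 3)
b = simple_payout(10_000, 0.045, 3)
better = "B" if b > a else "A"
```

Even this small comparison requires evaluating a geometric-progression term, which is exactly where the abstract reports zero-shot models failing; tool-augmented fine-tuning lets the model emit the formula and delegate the arithmetic to a calculator tool.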