EDINET-Bench: Evaluating LLMs on Complex Financial Tasks using Japanese Financial Statements

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
The lack of challenging evaluation benchmarks hinders rigorous assessment of large language models (LLMs) for Japanese financial analysis. Method: This paper introduces EDINET-Bench, the first open-source benchmark for Japanese financial statement understanding. It is constructed from ten years of annual reports publicly filed on Japan's Electronic Disclosure for Investors' NETwork (EDINET), with labels assigned automatically via rule-based heuristics. The benchmark covers three challenging tasks: accounting fraud detection, earnings forecasting, and industry prediction. Contribution/Results: EDINET-Bench establishes a systematic, fine-grained evaluation framework for Japanese financial NLP, using binary and multi-class classification protocols across its tasks. Experiments show that even state-of-the-art LLMs reach only 55–60% accuracy on the core binary tasks, performing only slightly better than a logistic regression baseline, which demonstrates the benchmark's diagnostic value and underscores the need for domain-specific adaptation. EDINET-Bench provides a robust infrastructure for developing and evaluating Japanese financial LLMs.

📝 Abstract
Financial analysis presents complex challenges that could leverage large language model (LLM) capabilities. However, the scarcity of challenging financial datasets, particularly for Japanese financial data, impedes academic innovation in financial analytics. As LLMs advance, this lack of accessible research resources increasingly hinders their development and evaluation in this specialized domain. To address this gap, we introduce EDINET-Bench, an open-source Japanese financial benchmark designed to evaluate the performance of LLMs on challenging financial tasks including accounting fraud detection, earnings forecasting, and industry prediction. EDINET-Bench is constructed by downloading annual reports from the past 10 years from Japan's Electronic Disclosure for Investors' NETwork (EDINET) and automatically assigning labels corresponding to each evaluation task. Our experiments reveal that even state-of-the-art LLMs struggle, performing only slightly better than logistic regression in binary classification for fraud detection and earnings forecasting. These results highlight significant challenges in applying LLMs to real-world financial applications and underscore the need for domain-specific adaptation. Our dataset, benchmark construction code, and evaluation code are publicly available to facilitate future research in finance with LLMs.
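The binary evaluation protocol described above (e.g., fraud vs. non-fraud filings, scored by accuracy) can be sketched in a few lines. The data below is purely illustrative toy input, not taken from EDINET-Bench; the point is only the shape of the gold-label-vs.-prediction comparison the benchmark uses.

```python
# Minimal sketch of an EDINET-Bench-style binary evaluation (toy data).
# Gold labels: 1 = fraudulent filing, 0 = clean filing.
gold = [1, 0, 0, 1, 0, 1, 0, 0]

# Predictions from a model under test (e.g., an LLM prompted with the
# annual-report text and asked for a yes/no fraud judgment).
pred = [1, 0, 1, 1, 0, 0, 0, 0]

def accuracy(gold, pred):
    """Fraction of filings where the predicted label matches the gold label."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

print(f"accuracy = {accuracy(gold, pred):.2f}")  # 6 of 8 correct -> 0.75
```

The paper's headline result is that LLM accuracy under this protocol lands only slightly above a logistic regression baseline trained on the same task.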
Problem

Research questions and friction points this paper is trying to address.

Lack of challenging Japanese financial datasets for LLM evaluation
Need for domain-specific benchmarks in financial analytics
Poor performance of LLMs in complex financial tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source Japanese financial benchmark EDINET-Bench
Automated label assignment for evaluation tasks
Public dataset for LLM financial research
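The automated label assignment mentioned above can be illustrated with a hypothetical rule for the earnings-forecasting task: compare a financial figure across consecutive fiscal years. The field name `net_income` and this exact rule are assumptions for illustration, not the paper's confirmed pipeline.

```python
# Hypothetical sketch of rule-based label assignment for earnings forecasting:
# label a filing 1 if net income rose in the following fiscal year, else 0.
# The "net_income" field and this rule are illustrative assumptions.
def earnings_label(this_year: dict, next_year: dict) -> int:
    return 1 if next_year["net_income"] > this_year["net_income"] else 0

print(earnings_label({"net_income": 120}, {"net_income": 150}))  # profit grew -> 1
print(earnings_label({"net_income": 150}, {"net_income": 120}))  # profit fell -> 0
```

Because the rule is deterministic and derived from already-filed reports, labels can be generated at scale without manual annotation, which is what makes a ten-year benchmark practical to build.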
Authors

Issa Sugiura (Kyoto University)
Takashi Ishida (Sakana AI)
Taro Makino (Sakana AI)
Chieko Tazuke (Sakana AI)
Takanori Nakagawa (Sakana AI)
Kosuke Nakago (Sakana AI)
David Ha (Sakana AI)