🤖 AI Summary
The lack of challenging evaluation benchmarks hinders rigorous assessment of large language models (LLMs) on Japanese financial analysis.
Method: This paper introduces EDINET-Bench, the first open-source benchmark for Japanese financial statement understanding. It is constructed from ten years of publicly filed annual reports in Japan’s Electronic Disclosure for Investors’ Network (EDINET), with automatic annotation via rule-based and heuristic methods. The benchmark supports three challenging tasks: accounting fraud detection, earnings forecasting, and industry classification.
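The rule-based annotation step can be illustrated with a small sketch. This is a hypothetical heuristic, not the paper's actual pipeline: it labels a company's fiscal year as fraudulent when a later amended filing cites an accounting irregularity. The `Filing` record and its field names are illustrative, not the EDINET schema.

```python
from dataclasses import dataclass

# Hypothetical filing record; field names are illustrative, not the EDINET schema.
@dataclass
class Filing:
    company_id: str
    fiscal_year: int
    doc_type: str      # e.g. "annual" or "amendment"
    reason: str = ""   # stated reason for an amendment, if any

# Illustrative trigger phrases an amendment reason might contain.
FRAUD_KEYWORDS = ("不適切な会計", "accounting irregularity", "restatement")

def assign_fraud_labels(filings):
    """Label each (company, year) annual report 1 if a later amendment
    cites an accounting irregularity, else 0 -- one plausible heuristic."""
    flagged = {
        (f.company_id, f.fiscal_year)
        for f in filings
        if f.doc_type == "amendment"
        and any(k in f.reason for k in FRAUD_KEYWORDS)
    }
    return {
        (f.company_id, f.fiscal_year): int((f.company_id, f.fiscal_year) in flagged)
        for f in filings
        if f.doc_type == "annual"
    }
```

Because the labels come from later public corrections rather than manual audit, this style of annotation scales to ten years of filings but inherits whatever noise the disclosure process carries.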
Contribution/Results: EDINET-Bench establishes the first systematic, fine-grained evaluation framework for Japanese financial NLP, using binary and multi-class classification protocols across its tasks. Experiments show that current state-of-the-art LLMs reach only 55–60% accuracy on the core tasks, well below human expert performance, demonstrating the benchmark's diagnostic value and underscoring the need for domain-specific adaptation. EDINET-Bench thus provides a robust foundation for developing and evaluating Japanese financial LLMs.
📝 Abstract
Financial analysis presents complex challenges that could leverage large language model (LLM) capabilities. However, the scarcity of challenging financial datasets, particularly for Japanese financial data, impedes academic innovation in financial analytics. As LLMs advance, this lack of accessible research resources increasingly hinders their development and evaluation in this specialized domain. To address this gap, we introduce EDINET-Bench, an open-source Japanese financial benchmark designed to evaluate the performance of LLMs on challenging financial tasks, including accounting fraud detection, earnings forecasting, and industry prediction. EDINET-Bench is constructed by downloading annual reports from the past 10 years from Japan's Electronic Disclosure for Investors' NETwork (EDINET) and automatically assigning labels corresponding to each evaluation task. Our experiments reveal that even state-of-the-art LLMs struggle, performing only slightly better than logistic regression in binary classification for fraud detection and earnings forecasting. These results highlight significant challenges in applying LLMs to real-world financial applications and underscore the need for domain-specific adaptation. Our dataset, benchmark construction code, and evaluation code are publicly available to facilitate future research in finance with LLMs.
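The logistic-regression comparison point can be made concrete with a minimal sketch. The two toy features and labels below are invented for illustration (the paper's actual feature set is not specified here); the point is only the shape of such a classical baseline for binary fraud detection.

```python
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Tiny logistic regression trained by per-sample gradient descent --
    the kind of classical baseline LLMs are compared against."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi                     # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify via the sign of the linear score."""
    return int(sum(wj * xj for wj, xj in zip(w, xi)) + b > 0.0)

# Invented toy features (e.g. normalized accrual ratio, leverage); 1 = fraud.
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.9], [0.1, 0.2], [0.2, 0.1], [0.3, 0.2]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
```

That a model this simple is competitive with state-of-the-art LLMs on the benchmark's binary tasks is what makes the reported result striking.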