🤖 AI Summary
The lack of automated tools for evaluating the quality of financial disclosures in Q&A forums on Chinese investor interaction platforms hinders regulatory oversight and market transparency.
Method: We introduce FinTruthQA, a benchmark for assessing the quality of financial information disclosure in Chinese capital markets, comprising 6,000 real-world question-answer pairs, each manually annotated across four dimensions: question identification, question relevance, answer readability, and answer relevance. We define and quantify these multi-dimensional disclosure quality criteria specifically for Q&A contexts and benchmark both traditional NLP models and large language models (LLMs) on the resulting four tasks.
Results: Experiments reveal that existing models perform well on question identification and question relevance but exhibit significant deficiencies in evaluating answer readability and answer relevance. FinTruthQA provides a reproducible, extensible evaluation resource for regtech applications, auditing practice, and academic research in financial communication.
📝 Abstract
Accurate and transparent financial information disclosure is essential in accounting and finance, fostering trust and enabling informed investment decisions that drive economic development. Among the many information disclosure platforms, the Chinese stock exchanges' investor interactive platform provides a novel and interactive way for listed firms to disclose information of interest to investors through an online question-and-answer (Q&A) format. However, listed firms commonly respond to questions with limited or no substantive information, and automatically evaluating the quality of financial information disclosure across large volumes of Q&A pairs is challenging. In this study, our interdisciplinary team of AI and finance professionals proposed FinTruthQA, a benchmark designed to evaluate advanced natural language processing (NLP) techniques for the automatic quality assessment of information disclosure in financial Q&A data. It comprises 6,000 real-world financial Q&A entries, each manually annotated against four key evaluation criteria. We benchmarked a range of NLP techniques on FinTruthQA, including large language models (LLMs). Experiments showed that existing NLP models have strong predictive ability on the question identification and question relevance tasks but are suboptimal on the answer readability and answer relevance tasks. By establishing this benchmark, we provide a robust foundation for the automatic evaluation of information disclosure, demonstrating how AI can be leveraged for social good by promoting transparency, fairness, and investor protection in financial disclosure practices. FinTruthQA can be used by auditors, regulators, and financial analysts for real-time monitoring and data-driven decision-making, as well as by researchers for advanced studies in accounting and finance, ultimately fostering greater trust and efficiency in financial markets.