The Structure of Financial Equity Research Reports - Identification of the Most Frequently Asked Questions in Financial Analyst Reports to Automate Equity Research Using Llama 3 and GPT-4

📅 2024-07-04
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Financial equity research reports (ERRs) lack an empirical account of which questions they answer, which information is considered essential, and which content could be automated. Method: We deconstructed 72 real-world ERRs sentence by sentence and inductively derived 169 high-frequency question archetypes without predefined categories, establishing the first empirically grounded ERR question taxonomy. Results: 78.7% of questions are automatable, consisting of 48.2% text-extractable questions (suited to large language models, LLMs) and 30.5% database-extractable questions; only 21.3% require human judgment. Empirical validation with Llama-3-70B and GPT-4-turbo showed that the two models complement each other's strengths and weaknesses, jointly covering approximately 80% of sentence-generation tasks. The work thereby quantifies the capability boundaries of LLMs in ERR automation, enabling concurrent improvements in report quality and efficiency. The full question list, including archetypes and their frequencies, will be made available after peer review.

📝 Abstract
This research dissects financial equity research reports (ERRs) by mapping their content into categories. There is insufficient empirical analysis of the questions answered in ERRs. In particular, it is not understood how frequently certain information appears, what information is considered essential, and what information requires human judgment to distill into an ERR. The study analyzes 72 ERRs sentence by sentence, classifying their 4940 sentences into 169 unique question archetypes. We did not predefine the questions but derived them solely from the statements in the ERRs. This approach provides an unbiased view of the content of the observed ERRs. Subsequently, we used public corporate reports to classify the questions' potential for automation. Answers were labeled "text-extractable" if the answers to the question were accessible in corporate reports. 78.7% of the questions in ERRs can be automated. These automatable questions consist of 48.2% text-extractable questions (suited to processing by large language models, LLMs) and 30.5% database-extractable questions. Only 21.3% of questions require human judgment to answer. We empirically validate using Llama-3-70B and GPT-4-turbo-2024-04-09 that recent advances in language generation and information extraction enable the automation of approximately 80% of the statements in ERRs. Surprisingly, the models complement each other's strengths and weaknesses well. The research confirms that the current writing process of ERRs can likely benefit from additional automation, improving quality and efficiency. The research thus allows us to quantify the potential impacts of introducing large language models in the ERR writing process. The full question list, including the archetypes and their frequency, will be made available online after peer review.
Problem

Research questions and friction points this paper is trying to address.

Identify most frequent questions in financial equity research reports
Determine automation potential of questions using LLMs like Llama 3 and GPT-4
Quantify human judgment vs automatable content in equity reports
Innovation

Methods, ideas, or system contributions that make the work stand out.

Classified 4940 sentences into 169 inductively derived question archetypes
Found 78.7% of questions automatable (48.2% text-extractable, 30.5% database-extractable)
Empirically validated ~80% automation of ERR statements with Llama-3-70B and GPT-4-turbo
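The automation-potential figures above come from labeling each classified sentence with an automation category and tallying the shares. A minimal sketch of that tally step, using hypothetical sentences and labels (the archetype texts and category names below are illustrative, not taken from the paper's released question list):

```python
from collections import Counter

# Hypothetical labeled sentences: (question text, automation category).
# Categories mirror the paper's three-way split: text-extractable,
# database-extractable, and human-judgment questions.
labeled = [
    ("What was revenue last quarter?", "database"),
    ("What did management guide for next year?", "text"),
    ("Is the valuation attractive versus peers?", "human"),
    ("What are the key business risks?", "text"),
    ("What is the dividend per share?", "database"),
]

def automation_breakdown(rows):
    """Count sentences per automation category and return percentage shares."""
    counts = Counter(category for _, category in rows)
    total = sum(counts.values())
    return {category: 100 * n / total for category, n in counts.items()}

shares = automation_breakdown(labeled)
# Automatable share = text-extractable + database-extractable.
automatable = shares.get("text", 0) + shares.get("database", 0)
print(f"automatable share: {automatable:.1f}%")
```

On the paper's real data this tally yields 48.2% text-extractable plus 30.5% database-extractable, i.e. the reported 78.7% automatable share.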
Adria Pop
University of St. Gallen (HSG), Institute of Computer Science, St. Gallen, Switzerland
J. Spörer
University of St. Gallen (HSG), Institute of Computer Science, St. Gallen, Switzerland
Siegfried Handschuh
University of St. Gallen (HSG), Institute of Computer Science, St. Gallen, Switzerland