🤖 AI Summary
Financial equity research reports (ERRs) lack empirical analysis of what information they contain, which of it is essential, and which of it can be automated. Method: We systematically deconstructed 72 real-world ERRs and inductively identified, without pre-defined categories, 169 high-frequency question prototypes at the sentence level, establishing the first empirically grounded ERR question taxonomy. Results: 78.7% of report content is automatable: 48.2% via text extraction from corporate reports and 30.5% via structured databases; only 21.3% requires human judgment. Multi-model validation using Llama-3-70B and GPT-4-turbo revealed strong complementarity, with the two models jointly covering roughly 80% of sentence-generation tasks. This work quantifies the capability boundaries of large language models (LLMs) in ERR automation, enabling concurrent improvements in report quality and efficiency. All question prototypes and their frequencies are publicly released.
📝 Abstract
This research dissects financial equity research reports (ERRs) by mapping their content into categories. Empirical analysis of the questions answered in ERRs is scarce: it is not well understood how frequently certain information appears, which information is considered essential, and which requires human judgment to distill into an ERR. The study analyzes 72 ERRs sentence by sentence, classifying their 4,940 sentences into 169 unique question archetypes. We did not predefine the questions but derived them solely from the statements in the ERRs, which provides an unbiased view of the content of the observed ERRs. Subsequently, we used public corporate reports to classify each question's potential for automation. A question was labeled "text-extractable" if its answer was accessible in corporate reports. In total, 78.7% of the questions in ERRs can be automated: 48.2% are text-extractable (suited to processing by large language models, LLMs) and 30.5% are database-extractable. Only 21.3% of questions require human judgment to answer. Using Llama-3-70B and GPT-4-turbo-2024-04-09, we empirically validate that recent advances in language generation and information extraction enable the automation of approximately 80% of the statements in ERRs. Surprisingly, the two models complement each other's strengths and weaknesses well. The research confirms that the current ERR writing process can likely benefit from additional automation, improving both quality and efficiency, and it allows us to quantify the potential impact of introducing large language models into that process. The full question list, including the archetypes and their frequencies, will be made available online after peer review.
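The aggregation step described above, from per-sentence labels to the reported automation shares, can be sketched as follows. This is a minimal illustration, not the authors' code: the label names and the tiny five-sentence sample are hypothetical stand-ins for the study's 4,940 labeled sentences.

```python
from collections import Counter

# Hypothetical per-sentence labels, mirroring the paper's three
# automation categories (the real study labeled 4,940 sentences).
labels = [
    "text-extractable",      # answer found in corporate report text (LLM-suited)
    "database-extractable",  # answer found in structured databases
    "human-judgment",        # requires analyst judgment
    "text-extractable",
    "database-extractable",
]

# Count each category and convert counts to percentage shares.
counts = Counter(labels)
total = len(labels)
shares = {label: round(100 * n / total, 1) for label, n in counts.items()}

# The automatable share is the sum of the two extractable categories.
automatable = shares["text-extractable"] + shares["database-extractable"]
print(shares)
print(f"automatable: {automatable:.1f}%")
```

In the paper's data this aggregation yields 48.2% text-extractable plus 30.5% database-extractable, i.e. 78.7% automatable, with the remaining 21.3% requiring human judgment.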