🤖 AI Summary
This study investigates the feasibility and robustness of large language models (LLMs) for data extraction in systematic reviews. Addressing gaps in the design and evaluation of LLM-based automation tools, we propose a systematic review–oriented LLM evaluation template and use domain-specific prompt engineering to assess GPT-4's performance across three research domains—clinical, animal, and social sciences—as well as on the EBM-NLP dataset for PICO element identification. Our results provide early empirical evidence on the use of LLMs as auxiliary reviewers (e.g., second or third reviewers), revealing cross-domain performance disparities (accuracy: 82% in clinical, 80% in animal, 72% in social sciences) and notable response instability. We further find that participant and intervention extraction consistently outperforms outcome identification (>80% vs. markedly lower), highlighting outcome extraction as a key bottleneck. Finally, our experience suggests that fine-grained, human-led evaluation is more informative than automated metrics (e.g., BLEU, ROUGE), which showed limited value in this setting.
📝 Abstract
This paper describes a rapid feasibility study of using GPT-4, a large language model (LLM), to (semi)automate data extraction in systematic reviews. Despite the recent surge of interest in LLMs, there is still a lack of understanding of how to design LLM-based automation tools and how to robustly evaluate their performance. During the 2023 Evidence Synthesis Hackathon we conducted two feasibility studies. In the first, we automatically extracted study characteristics from studies in the human clinical, animal, and social science domains, using two studies from each category for prompt development and ten for evaluation. In the second, we used the LLM to predict Participants, Interventions, Controls and Outcomes (PICOs) labelled within 100 abstracts in the EBM-NLP dataset. Overall, results indicated an accuracy of around 80%, with some variability between domains (82% for human clinical, 80% for animal, and 72% for studies of human social sciences). Causal inference methods and study design were the data extraction items with the most errors. In the PICO study, participants and intervention/control showed high accuracy (>80%), while outcomes were more challenging. Evaluation was done manually; automated scoring methods such as BLEU and ROUGE showed limited value. We observed variability in the LLM's predictions and changes in response quality. This paper presents a template for future evaluations of LLMs in the context of data extraction for systematic review automation. Our results suggest that there may be value in using LLMs, for example as second or third reviewers. However, caution is advised when integrating models such as GPT-4 into tools. Further research on stability and reliability in practical settings is warranted for each type of data that is processed by the LLM.
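The limited value of token-overlap metrics noted above can be illustrated with a minimal sketch. The function below computes a ROUGE-1-style unigram F1 score (this is a generic illustration, not the paper's evaluation code): two extractions of the same participant description that a human reviewer would judge equivalent receive a very low score because they share almost no surface tokens. The example strings are invented for illustration.

```python
from collections import Counter

def unigram_f1(candidate: str, reference: str) -> float:
    # ROUGE-1-style unigram F1: multiset overlap of lowercased tokens
    # between a model extraction and the reference answer.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Semantically equivalent descriptions of the same participants,
# with little surface overlap (hypothetical examples):
reference = "adults aged 65 years and older with type 2 diabetes"
candidate = "elderly people (65+) diagnosed with T2DM"
print(round(unigram_f1(candidate, reference), 2))  # low score despite equivalence
```

A human reviewer comparing extractions against the source text avoids this failure mode, which is one reason manual, fine-grained evaluation was preferred in this study.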