Exploring the use of a Large Language Model for data extraction in systematic reviews: a rapid feasibility study

📅 2024-05-23
🏛️ ALTARS@ECIR
📈 Citations: 2
Influential: 0
🤖 AI Summary
This study investigates the feasibility and robustness of large language models (LLMs) for data extraction in systematic reviews. Addressing gaps in how LLM-based automation tools are designed and evaluated, the authors propose a systematic review-oriented LLM evaluation template and use domain-specific prompt engineering to assess GPT-4's performance across three research domains (clinical, animal, and social sciences) and on the EBM-NLP dataset for PICO element identification. Contributions include an empirical assessment of LLMs as potential auxiliary reviewers (e.g., second or third reviewers), revealing cross-domain performance disparities (accuracy: 82% in clinical, 80% in animal, 72% in social science studies) and notable response instability. Participant and intervention extraction consistently outperformed outcome identification (above 80% accuracy, versus markedly lower for outcomes), highlighting outcome extraction as a key bottleneck. Finally, fine-grained, human-led evaluation proved more reliable and informative than automated metrics such as BLEU and ROUGE.

📝 Abstract
This paper describes a rapid feasibility study of using GPT-4, a large language model (LLM), to (semi)automate data extraction in systematic reviews. Despite the recent surge of interest in LLMs there is still a lack of understanding of how to design LLM-based automation tools and how to robustly evaluate their performance. During the 2023 Evidence Synthesis Hackathon we conducted two feasibility studies. Firstly, to automatically extract study characteristics from human clinical, animal, and social science domain studies. We used two studies from each category for prompt-development; and ten for evaluation. Secondly, we used the LLM to predict Participants, Interventions, Controls and Outcomes (PICOs) labelled within 100 abstracts in the EBM-NLP dataset. Overall, results indicated an accuracy of around 80%, with some variability between domains (82% for human clinical, 80% for animal, and 72% for studies of human social sciences). Causal inference methods and study design were the data extraction items with the most errors. In the PICO study, participants and intervention/control showed high accuracy (>80%), outcomes were more challenging. Evaluation was done manually; scoring methods such as BLEU and ROUGE showed limited value. We observed variability in the LLMs predictions and changes in response quality. This paper presents a template for future evaluations of LLMs in the context of data extraction for systematic review automation. Our results show that there might be value in using LLMs, for example as second or third reviewers. However, caution is advised when integrating models such as GPT-4 into tools. Further research on stability and reliability in practical settings is warranted for each type of data that is processed by the LLM.
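The abstract notes that evaluation was done manually, item by item, because automated scores such as BLEU and ROUGE showed limited value. A minimal sketch of what such item-level scoring could look like is below; this is an illustrative stand-in, not the authors' actual evaluation template, and the item names, reference answers, and the exact/partial scoring scheme are all hypothetical:

```python
# Illustrative sketch (NOT the paper's actual template): score LLM-extracted
# study characteristics against human reference answers item by item, using a
# normalised exact/partial match instead of BLEU/ROUGE-style n-gram overlap.

def normalise(text: str) -> str:
    """Lowercase and collapse whitespace so trivial differences don't count as errors."""
    return " ".join(text.lower().split())

def score_item(predicted: str, reference: str) -> float:
    """1.0 for an exact (normalised) match, 0.5 if one string contains the other, else 0.0."""
    p, r = normalise(predicted), normalise(reference)
    if p == r:
        return 1.0
    if p and r and (p in r or r in p):
        return 0.5
    return 0.0

def accuracy(predictions: dict, references: dict) -> float:
    """Mean item score across all extraction items present in the reference."""
    scores = [score_item(predictions.get(k, ""), v) for k, v in references.items()]
    return sum(scores) / len(scores)

# Hypothetical example: one study's PICO-style reference vs. an LLM's output.
reference = {
    "population": "Adults with type 2 diabetes",
    "intervention": "Metformin 500 mg twice daily",
    "outcome": "Change in HbA1c at 12 weeks",
}
predicted = {
    "population": "adults with type 2 diabetes",  # exact match after normalisation
    "intervention": "Metformin",                  # partial match
    "outcome": "Quality of life",                 # miss
}
print(round(accuracy(predicted, reference), 2))  # → 0.5
```

A scheme like this makes per-item error patterns visible (e.g., outcomes scoring lower than participants and interventions, as the paper reports), which aggregate n-gram metrics tend to obscure.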
Problem

Research questions and friction points this paper is trying to address.

Assessing GPT-4 for systematic review data extraction
Evaluating LLM accuracy across diverse study domains
Developing LLM evaluation templates for automation tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-4 for data extraction
Automated PICO prediction
Template for LLM evaluation
👥 Authors
Lena Schmidt
National Institute for Health and Care Research Innovation Observatory, Population Health Sciences Institute, Newcastle University, Newcastle, UK
Kaitlyn Hair
Centre for Clinical Brain Sciences, University of Edinburgh, UK
Sergio Graziosi
UCL Social Research Institute, University College London, London, UK
Fiona Campbell
National Institute for Health and Care Research Innovation Observatory, Population Health Sciences Institute, Newcastle University, Newcastle, UK
Claudia Kapp
Institute for Quality and Efficiency in Health Care, Cologne, Germany
Alireza Khanteymoori
Department of Neurosurgery, Neurocenter, Medical Center - University of Freiburg, Freiburg, Germany
Dawn Craig
National Institute for Health and Care Research Innovation Observatory, Population Health Sciences Institute, Newcastle University, Newcastle, UK
Mark Engelbert
International Initiative for Impact Evaluation (3ie), School of International Development, University of East Anglia, Norwich, UK
James Thomas
UCL Social Research Institute, University College London, London, UK