ClinicRealm: Re-evaluating Large Language Models with Conventional Machine Learning for Non-Generative Clinical Prediction Tasks

📅 2024-07-26
📈 Citations: 2
Influential: 0
🤖 AI Summary
The practical utility of large language models (LLMs) in non-generative clinical prediction tasks remains unclear: their performance may be underestimated relative to specialized models (e.g., BERT, traditional ML), and the absence of standardized benchmarks risks misuse. Method: We systematically evaluate 9 GPT-family, 5 BERT-family, and 7 traditional ML models across two clinical prediction settings—unstructured clinical text and structured electronic health records (EHRs)—under zero-shot, few-shot, and fine-tuned regimes. Contribution/Results: We provide first empirical evidence that state-of-the-art LLMs outperform fine-tuned BERT by +8.2% accuracy in zero-shot settings; that open-weight LLMs (e.g., DeepSeek-R1/V3) match or exceed closed-source counterparts (e.g., GPT-4o); and that in few-shot EHR tasks, LLMs achieve a +5.7% average AUC gain. We propose a data-efficient, prompt-driven paradigm for clinical prediction and demonstrate LLMs' viability as cost-effective clinical AI tools.
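The "prompt-driven paradigm" above can be illustrated with a minimal sketch: wrap a clinical note in a zero-shot instruction prompt and map the model's free-text reply back to a binary label. The prompt wording, task name, and `parse_label` heuristic below are illustrative assumptions, not the paper's exact protocol.

```python
def build_zero_shot_prompt(note: str, task: str = "in-hospital mortality") -> str:
    """Format an unstructured clinical note as a zero-shot prediction prompt.

    Hypothetical template; the paper's actual prompts may differ.
    """
    return (
        "You are a clinical prediction assistant.\n"
        f"Task: predict {task} for the patient described below.\n"
        f"Clinical note:\n{note}\n"
        "Answer with exactly one word: 'yes' or 'no'."
    )


def parse_label(reply: str) -> int:
    """Map the model's free-text reply to a binary label (1 = positive)."""
    return 1 if "yes" in reply.strip().lower() else 0


# Example usage with a made-up note; the prompt would be sent to any LLM API.
prompt = build_zero_shot_prompt("72M, septic shock, lactate 6.2, on vasopressors.")
label = parse_label("Yes")  # a hypothetical model reply
```

The appeal of this setup, as the summary argues, is data efficiency: no task-specific training data or fine-tuning is needed, only a prompt and a label-parsing rule.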

📝 Abstract
Large Language Models (LLMs) are increasingly deployed in medicine. However, their utility in non-generative clinical prediction, often presumed inferior to specialized models, remains under-evaluated, leading to ongoing debate within the field and potential for misuse, misunderstanding, or over-reliance due to a lack of systematic benchmarking. Our ClinicRealm study addresses this by benchmarking 9 GPT-based LLMs, 5 BERT-based models, and 7 traditional methods on unstructured clinical notes and structured Electronic Health Records (EHRs). Key findings reveal a significant shift: for clinical note predictions, leading LLMs (e.g., DeepSeek R1/V3, GPT o3-mini-high) in zero-shot settings now decisively outperform fine-tuned BERT models. On structured EHRs, while specialized models excel with ample data, advanced LLMs (e.g., GPT-4o, DeepSeek R1/V3) show potent zero-shot capabilities, often surpassing conventional models in data-scarce settings. Notably, leading open-source LLMs can match or exceed proprietary counterparts. These results establish modern LLMs as powerful non-generative clinical prediction tools, particularly with unstructured text, while offering data-efficient options for structured data, thus necessitating a re-evaluation of model selection strategies. This research should serve as an important insight for medical informaticists, AI developers, and clinical researchers, potentially prompting a reassessment of current assumptions and inspiring new approaches to LLM application in predictive healthcare.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs versus specialized models in clinical prediction tasks
Assessing LLM performance on unstructured clinical notes and structured EHRs
Re-evaluating model selection strategies for non-generative healthcare applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking LLMs and BERT models for clinical predictions
LLMs outperform BERT in zero-shot clinical note predictions
Advanced LLMs excel in data-scarce structured EHR settings
Yinghao Zhu
The University of Hong Kong
Data Mining, AI for Healthcare

Junyi Gao
University of Edinburgh
Data Mining, AI for Healthcare

Zixiang Wang
Peking University
AI for Healthcare

Weibin Liao
Peking University
Large Language Models, Reinforcement Learning, Medical Image Analysis

Xiaochen Zheng
ETH Zürich, Zürich, Switzerland, 8092

Lifang Liang
National Engineering Research Center for Software Engineering, Peking University, Beijing, China, 100871

Yasha Wang
National Engineering Research Center for Software Engineering, Peking University, Beijing, China, 100871

Chengwei Pan
Beihang University
Virtual Reality, Computer Graphics, Computer Vision, Medical Image Processing, Deep Learning

Ewen M. Harrison
Centre for Medical Informatics, University of Edinburgh, Edinburgh, UK, EH8 9YL

Liantao Ma
National Engineering Research Center for Software Engineering, Peking University, Beijing, China, 100871