AI Summary
Current AI systems generalize poorly and achieve low accuracy in cross-disease medical literature retrieval, screening, and data extraction. To address this, we propose LEADS, the first foundation model explicitly designed for human-AI collaborative medical literature mining and aligned with systematic review (SR) expert workflows. LEADS is fine-tuned on 630K high-quality medical instructions derived from 21K SRs, 450K clinical trial publications, and 27K trial registry entries, enabling it to generate traceable supporting references and execute tasks interactively. Across six core biomedical NLP tasks, LEADS consistently outperforms four leading large language models. Working with experts, it achieves a study screening recall of 0.81 (up 0.04 absolute over experts working alone) and a data extraction accuracy of 0.85 (up 0.05), while reducing processing time by roughly 25% on average. These advances substantially improve the efficiency and reliability of evidence-based medicine practice.
Abstract
Systematic literature review is essential for evidence-based medicine, requiring comprehensive analysis of clinical trial publications. However, the application of artificial intelligence (AI) models to medical literature mining has been limited by insufficient training and evaluation across broad therapeutic areas and diverse tasks. Here, we present LEADS, an AI foundation model for study search, screening, and data extraction from medical literature. The model is trained on 633,759 instruction data points in LEADSInstruct, curated from 21,335 systematic reviews, 453,625 clinical trial publications, and 27,015 clinical trial registries. We show that LEADS delivers consistent improvements over four cutting-edge generic large language models (LLMs) on six tasks. Furthermore, LEADS enhances expert workflows by providing supportive references in response to expert requests, streamlining processes while maintaining high-quality results. A study with 16 clinicians and medical researchers from 14 institutions showed that experts collaborating with LEADS achieved a recall of 0.81 in study selection, compared with 0.77 for experts working alone, with a time savings of 22.6%. In data extraction tasks, experts using LEADS achieved an accuracy of 0.85 versus 0.80 without it, alongside a 26.9% time savings. These findings highlight the potential of specialized medical literature foundation models to outperform generic models, delivering significant quality and efficiency benefits when integrated into expert workflows for medical literature mining.