🤖 AI Summary
Context: Existing clinical LLM evaluation benchmarks rely predominantly on medical examination questions or PubMed abstracts; they fail to reflect the complexity of real-world electronic health records (EHRs) and offer limited linguistic, specialty, and task diversity.
Method: We introduce BRIDGE, the first EHR-centric, multilingual (nine languages), multispecialty, multitask (87 tasks) benchmark for clinical text understanding, designed to rigorously evaluate the generalization capabilities of 52 state-of-the-art LLMs. We apply a standardized evaluation protocol and three inference strategies (zero-shot, few-shot, and chain-of-thought; sketched below), yielding 13,572 evaluations in total (52 models × 87 tasks × 3 strategies).
Contribution/Results: Our analysis reveals that (1) top open-weight models match or surpass closed-source counterparts; (2) medically fine-tuned models built on older architectures often underperform modern general-purpose models; and (3) performance varies substantially across languages, clinical specialties, and task types. BRIDGE is publicly released with a dynamic leaderboard, providing a reproducible foundation for clinical LLM evaluation.
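The three inference strategies differ only in how the prompt is assembled around the same task instruction and clinical text. Here is a minimal sketch of that assembly; the function name, prompt wording, and field layout below are illustrative assumptions, not BRIDGE's actual evaluation harness:

```python
# Illustrative sketch of the three inference strategies evaluated in BRIDGE.
# All names and prompt templates here are hypothetical.

def build_prompt(instruction: str, note: str, strategy: str = "zero-shot",
                 examples: list[tuple[str, str]] | None = None) -> str:
    """Assemble one prompt for a clinical task instance.

    instruction: the task description (e.g., "Extract all medication names.")
    note:        the clinical text (EHR excerpt) to process
    examples:    solved (input, output) pairs, used only for few-shot prompting
    """
    parts = [instruction]
    if strategy == "few-shot" and examples:
        # Few-shot: prepend worked demonstrations before the query.
        for ex_in, ex_out in examples:
            parts.append(f"Input: {ex_in}\nOutput: {ex_out}")
    parts.append(f"Input: {note}")
    if strategy == "chain-of-thought":
        # CoT: elicit intermediate reasoning before the final answer.
        parts.append("Let's think step by step, then give the final answer.")
    parts.append("Output:")
    return "\n\n".join(parts)

# The full evaluation grid matches the reported total:
# 52 models x 87 tasks x 3 strategies = 13,572 evaluations.
assert 52 * 87 * 3 == 13_572
```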
📝 Abstract
Large language models (LLMs) hold great promise for medical applications and are evolving rapidly, with new models released at an accelerated pace. However, current evaluations of LLMs in clinical contexts remain limited. Most existing benchmarks rely on medical exam-style questions or PubMed-derived text, failing to capture the complexity of real-world electronic health record (EHR) data; others focus narrowly on specific application scenarios, limiting their generalizability to broader clinical use. To address this gap, we present BRIDGE, a comprehensive multilingual benchmark comprising 87 tasks drawn from real-world clinical data across nine languages. We systematically evaluated 52 state-of-the-art LLMs (including DeepSeek-R1, GPT-4o, Gemini, and Llama 4) under multiple inference strategies. Across a total of 13,572 experiments, our results reveal substantial performance variation across model sizes, languages, natural language processing tasks, and clinical specialties. Notably, we demonstrate that open-source LLMs can achieve performance comparable to proprietary models, while medically fine-tuned LLMs built on older architectures often underperform newer general-purpose models. BRIDGE and its corresponding leaderboard serve as a foundational resource and a unique reference for the development and evaluation of new LLMs in real-world clinical text understanding.