🤖 AI Summary
This study addresses a critical gap in existing large language model (LLM) evaluation benchmarks, which lack systematic assessment of clinical knowledge and caregiving practices specific to Alzheimer's disease and related dementias (ADRD). To bridge this gap, we introduce ADRD-Bench, the first domain-specific benchmark for ADRD, comprising multi-source clinical questions and practical scenarios grounded in real-world, evidence-based caregiving practices, all curated under expert guidance. We conduct a comprehensive evaluation of 33 state-of-the-art LLMs, spanning open-source general-purpose, medical-specialized, and closed-source models. While the top-performing models achieve accuracy exceeding 0.9, they exhibit notable deficiencies in reasoning consistency and contextual stability, highlighting key challenges in reliably translating clinical knowledge into everyday caregiving applications.
📝 Abstract
Large language models (LLMs) have shown great potential for healthcare applications. However, existing evaluation benchmarks provide minimal coverage of Alzheimer's disease and related dementias (ADRD). To address this gap, we introduce ADRD-Bench, the first ADRD-specific benchmark dataset designed for rigorous evaluation of LLMs. ADRD-Bench has two components: 1) ADRD Unified QA, a set of 1,352 questions consolidated from seven established medical benchmarks, providing a unified assessment of clinical knowledge; and 2) ADRD Caregiving QA, a novel set of 149 questions derived from the Aging Brain Care (ABC) program, a widely used, evidence-based brain health management program. Developed under the guidance of a program with national expertise in comprehensive ADRD care, this new set addresses the lack of practical caregiving context in existing benchmarks. We evaluated 33 state-of-the-art LLMs on ADRD-Bench. Accuracy ranged from 0.63 to 0.93 for open-weight general models (mean: 0.78; std: 0.09), from 0.48 to 0.93 for open-weight medical models (mean: 0.82; std: 0.13), and from 0.83 to 0.91 for closed-source general models (mean: 0.89; std: 0.03). While top-tier models achieved high accuracy (>0.9), case studies revealed that inconsistent reasoning quality and stability limit their reliability, highlighting a critical need for domain-specific improvements that ground LLMs' knowledge and reasoning in daily caregiving data. The entire dataset is available at https://github.com/IIRL-ND/ADRD-Bench.