🤖 AI Summary
The medical domain lacks standardized benchmarks for evaluating large language model (LLM) agents, hindering systematic assessment of their clinical planning, tool invocation, and multi-step reasoning capabilities in realistic healthcare interactions.
Method: We introduce MedAgentBench, the first comprehensive evaluation benchmark for medical LLM agents, comprising 100 physician-authored, patient-specific clinical tasks across 10 categories, realistic structured electronic health records for 100 patients (700K+ data elements), and a FHIR-compliant simulated EMR environment. The evaluation framework is grounded in clinical authenticity, compliance with interoperability standards, and unsaturated task difficulty.
Contribution/Results: We release an open-source automated evaluation toolkit. Experiments show that the best model, GPT-4o, achieves a 72% task success rate, with significant performance variation across clinical task categories. MedAgentBench is fully open-sourced, providing reproducible, traceable evaluation infrastructure for medical AI agents.
📝 Abstract
Recent large language models (LLMs) have demonstrated significant advancements, particularly in their ability to serve as agents, thereby surpassing their traditional role as chatbots. These agents can leverage their planning and tool-use capabilities to address tasks specified at a high level. However, a standardized dataset to benchmark the agent capabilities of LLMs in medical applications is currently lacking, making it challenging to evaluate LLMs on complex tasks in interactive healthcare environments. To address this gap, we introduce MedAgentBench, a broad evaluation suite designed to assess the agent capabilities of large language models within medical records contexts. MedAgentBench encompasses 100 patient-specific, clinically derived tasks from 10 categories written by human physicians, realistic profiles of 100 patients with over 700,000 data elements, a FHIR-compliant interactive environment, and an accompanying codebase. Because the environment uses the standard APIs and communication infrastructure of modern EMR systems, it can be easily migrated into live EMR systems. MedAgentBench is an unsaturated agent-oriented benchmark on which current state-of-the-art LLMs show partial success: the best model (GPT-4o) achieves a success rate of 72%, leaving substantial room for improvement and giving the community a clear direction for optimization. Furthermore, performance varies significantly across task categories. MedAgentBench is publicly available at https://github.com/stanfordmlgroup/MedAgentBench, offering a valuable framework for model developers to track progress and drive continuous improvements in the agent capabilities of LLMs within the medical domain.
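Since the environment exposes standard FHIR REST APIs, an agent's tool calls reduce to ordinary FHIR searches and resource reads. The sketch below is illustrative only: the base URL, patient ID, and the minimal sample Bundle are invented for this example (they are not taken from MedAgentBench), but the URL shape and Bundle structure follow the FHIR specification's search and `Observation`/`valueQuantity` conventions.

```python
import json
from urllib.parse import urlencode

# Hypothetical FHIR base URL for a locally hosted simulated EMR server.
FHIR_BASE = "http://localhost:8080/fhir"

def build_observation_query(patient_id: str, loinc_code: str) -> str:
    """Build a standard FHIR search URL for a patient's lab observations."""
    params = urlencode({"patient": patient_id, "code": loinc_code})
    return f"{FHIR_BASE}/Observation?{params}"

def observation_values(bundle: dict):
    """Extract (value, unit) pairs from the entries of a FHIR searchset Bundle."""
    results = []
    for entry in bundle.get("entry", []):
        quantity = entry.get("resource", {}).get("valueQuantity", {})
        if "value" in quantity:
            results.append((quantity["value"], quantity.get("unit")))
    return results

# A minimal, made-up Bundle in the shape a FHIR server returns for a search.
sample_bundle = json.loads("""
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"coding": [{"system": "http://loinc.org",
                                       "code": "2345-7"}]},
                  "valueQuantity": {"value": 98.0, "unit": "mg/dL"}}}
  ]
}
""")

print(build_observation_query("S1234567", "2345-7"))
print(observation_values(sample_bundle))
```

In a live agent loop, the constructed URL would be issued as an HTTP GET against the simulated server and the parsed values fed back into the model's context; write-style tasks would instead POST new FHIR resources to the same base URL.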