MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for Complex Medical Reasoning

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing medical QA benchmarks are dominated by questions that even base models answer correctly, use inconsistent sampling and evaluation protocols across studies, and omit systematic analysis of the performance–cost–latency trade-off, which makes it hard to meaningfully differentiate advanced models on complex clinical reasoning (e.g., multi-step diagnosis and treatment planning). Method: MedAgentsBench is a standardized benchmark for hard clinical reasoning, built from seven established medical datasets, with unified sampling and multidimensional evaluation protocols for systematically assessing thinking models (e.g., DeepSeek R1, OpenAI o3) and search-augmented agent methods. Contribution/Results: The evaluation reveals substantial performance gaps across model families on complex questions, finds that search-based agent methods offer promising performance-to-cost ratios, and identifies optimal model selections under different computational constraints. Code and benchmark resources are publicly available for reproducible, comparable medical AI evaluation.
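
The summary's constraint-aware model selection reduces to picking the most accurate method that fits a cost and latency budget. Below is a minimal sketch of that idea; the model names and all numbers are hypothetical illustrations, not the paper's measured results:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    accuracy: float   # fraction correct on the hard-question subset
    cost_usd: float   # average dollars per question
    latency_s: float  # average seconds per answer

# Hypothetical measurements; real values come from running the benchmark.
PROFILES = [
    ModelProfile("thinking-model-a", accuracy=0.72, cost_usd=0.140, latency_s=45.0),
    ModelProfile("search-agent-b",   accuracy=0.66, cost_usd=0.030, latency_s=20.0),
    ModelProfile("base-model-c",     accuracy=0.51, cost_usd=0.004, latency_s=3.0),
]

def select_model(profiles, max_cost_usd=None, max_latency_s=None):
    """Return the most accurate model that fits the cost/latency budget."""
    feasible = [
        p for p in profiles
        if (max_cost_usd is None or p.cost_usd <= max_cost_usd)
        and (max_latency_s is None or p.latency_s <= max_latency_s)
    ]
    if not feasible:
        raise ValueError("no model satisfies the given constraints")
    return max(feasible, key=lambda p: p.accuracy)

if __name__ == "__main__":
    # A tight budget favors the cheaper search-augmented agent.
    print(select_model(PROFILES, max_cost_usd=0.05).name)  # search-agent-b
    # With no budget constraint, the strongest thinking model wins.
    print(select_model(PROFILES).name)                     # thinking-model-a
```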

📝 Abstract
Large Language Models (LLMs) have shown impressive performance on existing medical question-answering benchmarks. This high performance makes it increasingly difficult to meaningfully evaluate and differentiate advanced methods. We present MedAgentsBench, a benchmark that focuses on challenging medical questions requiring multi-step clinical reasoning, diagnosis formulation, and treatment planning, scenarios where current models still struggle despite their strong performance on standard tests. Drawing from seven established medical datasets, our benchmark addresses three key limitations in existing evaluations: (1) the prevalence of straightforward questions where even base models achieve high performance, (2) inconsistent sampling and evaluation protocols across studies, and (3) lack of systematic analysis of the interplay between performance, cost, and inference time. Through experiments with various base models and reasoning methods, we demonstrate that the latest thinking models, DeepSeek R1 and OpenAI o3, exhibit exceptional performance in complex medical reasoning tasks. Additionally, advanced search-based agent methods offer promising performance-to-cost ratios compared to traditional approaches. Our analysis reveals substantial performance gaps between model families on complex questions and identifies optimal model selections for different computational constraints. Our benchmark and evaluation framework are publicly available at https://github.com/gersteinlab/medagents-benchmark.
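
Limitation (1) suggests the benchmark retains only questions that defeat most base models. The exact filtering criterion is not stated in this summary, so the following is a sketch under that assumption; the function name, data shapes, and the 0.5 threshold are all hypothetical:

```python
def sample_hard_questions(questions, base_model_preds, max_pass_rate=0.5):
    """Keep questions that at most `max_pass_rate` of base models solve.

    questions:        list of {"id": str, "answer": str} dicts.
    base_model_preds: {model_name: {question_id: predicted_answer}}.
    """
    hard = []
    for q in questions:
        n_correct = sum(
            preds.get(q["id"]) == q["answer"]
            for preds in base_model_preds.values()
        )
        if n_correct / len(base_model_preds) <= max_pass_rate:
            hard.append(q)
    return hard
```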
Problem

Research questions and friction points this paper is trying to address.

Differentiating advanced methods on complex medical reasoning, where existing benchmarks have saturated
Fixing inconsistent sampling and evaluation protocols across existing medical QA studies
Analyzing trade-offs among performance, cost, and inference time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MedAgentsBench, a benchmark of hard questions drawn from seven established medical datasets
Evaluates thinking models (DeepSeek R1, OpenAI o3) alongside search-based agent frameworks
Analyzes performance-to-cost ratios of search-based agents (see the sketch below)
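
As referenced in the last bullet, a performance-to-cost comparison can be as simple as ranking methods by accuracy per dollar; the methods and numbers below are made up for illustration:

```python
# Hypothetical (method, accuracy, average dollars per question) measurements.
RESULTS = [
    ("thinking-model-a", 0.72, 0.140),
    ("search-agent-b",   0.66, 0.030),
    ("base-model-c",     0.51, 0.004),
]

# Rank by accuracy gained per dollar spent (higher is better).
for name, acc, cost in sorted(RESULTS, key=lambda r: r[1] / r[2], reverse=True):
    print(f"{name}: {acc / cost:.1f} accuracy per dollar")
```

A raw ratio tends to favor the cheapest method, so it is best read alongside absolute accuracy and the budget-constrained selection sketched under the AI Summary above.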