🤖 AI Summary
This work addresses the limitations of existing medical reasoning benchmarks, which rely on static, single-step evaluations and fail to capture the dynamic, multi-stage nature of real-world clinical workflows, including active querying of electronic health records (EHRs) and iterative use of clinical calculators. To bridge this gap, we introduce MedMCP-Calc, the first medical computation benchmark built on the Model Context Protocol (MCP), encompassing 118 scenario-based tasks across four clinical domains. MedMCP-Calc supports ambiguous task descriptions, structured EHR interaction, external tool invocation, and multi-step reasoning, and features process-level evaluation together with SQL database querying, retrieval-augmented generation, and instruction tuning. We further propose CalcMate, a model combining scenario planning with tool augmentation, which achieves state-of-the-art performance among open-source models. Evaluations across 23 prominent large language models reveal significant deficiencies in tool selection and iterative querying.
📝 Abstract
Medical calculators are fundamental to quantitative, evidence-based clinical practice. However, their real-world use is an adaptive, multi-stage process requiring proactive EHR data acquisition, scenario-dependent calculator selection, and multi-step computation, whereas current benchmarks cover only static, single-step calculations with explicit instructions. To address these limitations, we introduce MedMCP-Calc, the first benchmark for evaluating LLMs in realistic medical calculator scenarios through Model Context Protocol (MCP) integration. MedMCP-Calc comprises 118 scenario tasks across four clinical domains, featuring fuzzy task descriptions that mimic natural queries, structured EHR database interaction, external reference retrieval, and process-level evaluation. Our evaluation of 23 leading models reveals critical limitations: even top performers like Claude Opus 4.5 exhibit substantial gaps, including difficulty selecting appropriate calculators for end-to-end workflows given fuzzy queries, poor performance in iterative SQL-based database interactions, and marked reluctance to leverage external tools for numerical computation. Performance also varies considerably across clinical domains. Building on these findings, we develop CalcMate, a fine-tuned model incorporating scenario planning and tool augmentation, which achieves state-of-the-art performance among open-source models. The benchmark and code are available at https://github.com/SPIRAL-MED/MedMCP-Calc.