🤖 AI Summary
Existing evaluations of large language models (LLMs) in Traditional Chinese Medicine (TCM) lack systematic coverage and clinical authenticity, focusing predominantly on factual question answering while neglecting critical competencies such as diagnostic reasoning, prescription generation, and safety compliance. Method: We introduce MTCMB, the first multi-task TCM benchmark, comprising 12 subtasks across five categories: knowledge QA, linguistic understanding, diagnostic reasoning, prescription generation, and safety assessment. Co-developed with certified TCM practitioners, it integrates real-world medical cases, national licensure examination questions, and classical TCM texts. MTCMB uniquely incorporates syndrome differentiation reasoning, herbal formula planning, and contraindication identification, evaluated with domain-specific methods including multi-granularity annotation, adversarial samples, and syndrome consistency scoring. Results: Experiments reveal that state-of-the-art LLMs perform reasonably well on foundational knowledge but exhibit significant deficiencies in clinical reasoning, personalized prescription formulation, and safety judgment. We open-source the benchmark, evaluation toolkit, and baseline results to establish a reproducible standard for TCM AI evaluation.
📝 Abstract
Traditional Chinese Medicine (TCM) is a holistic medical system with millennia of accumulated clinical experience, playing a vital role in global healthcare, particularly across East Asia. However, the implicit reasoning, diverse textual forms, and lack of standardization in TCM pose major challenges for computational modeling and evaluation. Large Language Models (LLMs) have demonstrated remarkable potential in processing natural language across diverse domains, including general medicine. Yet their systematic evaluation in the TCM domain remains underdeveloped. Existing benchmarks either focus narrowly on factual question answering or lack domain-specific tasks and clinical realism. To fill this gap, we introduce MTCMB, a Multi-Task Benchmark for Evaluating LLMs on TCM Knowledge, Reasoning, and Safety. Developed in collaboration with certified TCM experts, MTCMB comprises 12 sub-datasets spanning five major categories: knowledge QA, language understanding, diagnostic reasoning, prescription generation, and safety evaluation. The benchmark integrates real-world case records, national licensing exams, and classical texts, providing an authentic and comprehensive testbed for TCM-capable models. Preliminary results indicate that current LLMs perform well on foundational knowledge but fall short in clinical reasoning, prescription planning, and safety compliance. These findings highlight the urgent need for domain-aligned benchmarks like MTCMB to guide the development of more competent and trustworthy medical AI systems. All datasets, code, and evaluation tools are publicly available at: https://github.com/Wayyuanyuan/MTCMB.