🤖 AI Summary
Evaluation of large language models (LLMs) for Traditional Chinese Medicine (TCM) currently lacks a unified, standardized multimodal question-answering (QA) benchmark.
Method: We introduce TCM-Ladder, the first comprehensive multimodal QA benchmark dedicated to TCM, covering core domains including foundational theory, diagnostics, herbal prescriptions, internal medicine, external medicine, gynecology, and pediatrics. It comprises over 52,000 multimodal items integrating text, images, and videos. We propose a unified multimodal TCM QA evaluation framework and a domain-specific metric, Ladder-Score, which jointly measures terminological accuracy and semantic expressiveness (see the illustrative sketch below). High-quality data are constructed via web crawling followed by expert validation, and a reasoning model is trained on the benchmark using a multi-stage strategy.
Contribution/Results: We systematically evaluate nine general-purpose and five TCM-specialized large models on TCM-Ladder. The benchmark, along with an open-source dataset and a dynamic online leaderboard, is publicly released to advance objective, comparable, and reproducible evaluation in TCM AI research.
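The summary and abstract describe Ladder-Score only by its two axes, terminology usage and semantic expression, without giving a formula. As a rough, non-authoritative illustration, the sketch below assumes the metric is a weighted blend of a terminology F1 (against a per-question gold term list) and a semantic-similarity score; every function name, the substring-based term matcher, the bag-of-words similarity proxy, and the 0.5 weight are our assumptions, not the authors' definition.

```python
import math
from collections import Counter

def term_f1(candidate: str, reference_terms: set[str], vocabulary: set[str]) -> float:
    """F1 between TCM terms detected in the candidate answer and the
    gold term list for the question (substring matching is a crude
    stand-in for proper term extraction)."""
    text = candidate.lower()
    cand_terms = {t for t in vocabulary if t in text}
    matched = cand_terms & reference_terms
    if not cand_terms or not reference_terms:
        return 0.0
    precision = len(matched) / len(cand_terms)
    recall = len(matched) / len(reference_terms)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def semantic_sim(candidate: str, reference: str) -> float:
    """Cosine similarity over bag-of-words counts; a real implementation
    would more likely use a sentence-embedding model."""
    va = Counter(candidate.lower().split())
    vb = Counter(reference.lower().split())
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def ladder_score(candidate: str, reference: str,
                 reference_terms: set[str], vocabulary: set[str],
                 alpha: float = 0.5) -> float:
    """Weighted blend of terminology F1 and semantic similarity.
    The 0.5 weighting is an arbitrary placeholder."""
    return (alpha * term_f1(candidate, reference_terms, vocabulary)
            + (1 - alpha) * semantic_sim(candidate, reference))

# Toy usage (terms stored lowercase to match the lowercased candidate text):
vocab = {"liver qi stagnation", "xiaoyao san", "spleen deficiency", "kidney yin deficiency"}
gold_terms = {"liver qi stagnation", "xiaoyao san"}
reference = "Liver qi stagnation is treated with Xiaoyao San to soothe the liver."
candidate = "The pattern is liver qi stagnation; Xiaoyao San soothes the liver."
print(round(ladder_score(candidate, reference, gold_terms, vocab), 3))
```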
📝 Abstract
Traditional Chinese Medicine (TCM), an effective form of alternative medicine, has been receiving increasing attention. In recent years, the rapid development of large language models (LLMs) tailored for TCM has underscored the need for an objective and comprehensive evaluation framework to assess their performance on real-world tasks. However, existing evaluation datasets are limited in scope and primarily text-based, lacking a unified and standardized multimodal question-answering (QA) benchmark. To address this issue, we introduce TCM-Ladder, the first multimodal QA dataset specifically designed for evaluating large TCM language models. The dataset spans multiple core disciplines of TCM, including fundamental theory, diagnostics, herbal formulas, internal medicine, surgery, pharmacognosy, and pediatrics. In addition to textual content, TCM-Ladder incorporates modalities such as images and videos. The dataset was constructed using a combination of automated and manual filtering processes and comprises 52,000+ questions in total. These questions span single-choice, multiple-choice, fill-in-the-blank, diagnostic dialogue, and visual comprehension tasks. We trained a reasoning model on TCM-Ladder and conducted comparative experiments against 9 state-of-the-art general-domain LLMs and 5 leading TCM-specific LLMs to evaluate their performance on the dataset. Moreover, we propose Ladder-Score, an evaluation method specifically designed for TCM question answering that assesses answer quality in terms of terminology usage and semantic expression. To our knowledge, this is the first work to evaluate mainstream general-domain and TCM-specific LLMs on a unified multimodal benchmark. The dataset and leaderboard are publicly available at https://tcmladder.com or https://54.211.107.106 and will be continuously updated.
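For a concrete picture of the data, the following is a hypothetical sketch of what a single TCM-Ladder item could look like, based only on the question types and modalities named in the abstract; the schema, every field name, and the example content are our assumptions, not the published format.

```python
from typing import Optional, TypedDict

class TCMLadderItem(TypedDict, total=False):
    """Hypothetical item schema; all field names are illustrative only."""
    item_id: str
    discipline: str                # e.g. "diagnostics", "herbal formulas"
    task_type: str                 # "single_choice" | "multiple_choice" |
                                   # "fill_in_blank" | "diagnostic_dialogue" |
                                   # "visual_comprehension"
    question: str
    options: Optional[list[str]]   # only for choice-style questions
    answer: str
    image_path: Optional[str]      # e.g. a tongue image for visual tasks
    video_path: Optional[str]

# A made-up single-choice example (content for illustration only):
example: TCMLadderItem = {
    "item_id": "diag-00421",
    "discipline": "diagnostics",
    "task_type": "single_choice",
    "question": "A pale tongue with a thin white coating most likely indicates which pattern?",
    "options": ["Excess heat", "Deficiency cold", "Blood stasis", "Phlegm-fire"],
    "answer": "Deficiency cold",
}
```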