TCM-Ladder: A Benchmark for Multimodal Question Answering on Traditional Chinese Medicine

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language models for Traditional Chinese Medicine (TCM) lack a unified, standardized multimodal question-answering (QA) evaluation benchmark. Method: We introduce TCM-Ladder, the first comprehensive multimodal QA benchmark dedicated to TCM, covering core domains including foundational theory, diagnostics, herbal prescriptions, internal and external medicine, gynecology, and pediatrics. It comprises over 52,000 multimodal items integrating text, images, and videos. We propose a unified multimodal TCM QA evaluation framework and a domain-specific metric, Ladder-Score, which jointly measures terminological accuracy and semantic expressiveness. High-quality data are constructed via web crawling followed by expert validation, and models are trained using a multi-stage reasoning strategy. Contribution/Results: We systematically evaluate nine general-purpose and five TCM-specialized large models on TCM-Ladder. The benchmark, along with an open-source dataset and a dynamic online leaderboard, is publicly released to advance objective, comparable, and reproducible evaluation in TCM AI research.

📝 Abstract
Traditional Chinese Medicine (TCM), as an effective alternative medicine, has been receiving increasing attention. In recent years, the rapid development of large language models (LLMs) tailored for TCM has underscored the need for an objective and comprehensive evaluation framework to assess their performance on real-world tasks. However, existing evaluation datasets are limited in scope and primarily text-based, lacking a unified and standardized multimodal question-answering (QA) benchmark. To address this issue, we introduce TCM-Ladder, the first multimodal QA dataset specifically designed for evaluating large TCM language models. The dataset spans multiple core disciplines of TCM, including fundamental theory, diagnostics, herbal formulas, internal medicine, surgery, pharmacognosy, and pediatrics. In addition to textual content, TCM-Ladder incorporates other modalities such as images and videos. The dataset was constructed using a combination of automated and manual filtering processes and comprises over 52,000 questions in total. These questions include single-choice, multiple-choice, fill-in-the-blank, diagnostic dialogue, and visual comprehension tasks. We trained a reasoning model on TCM-Ladder and conducted comparative experiments against nine state-of-the-art general-domain LLMs and five leading TCM-specific LLMs to evaluate their performance on the dataset. Moreover, we propose Ladder-Score, an evaluation method specifically designed for TCM question answering that assesses answer quality in terms of terminology usage and semantic expression. To our knowledge, this is the first work to evaluate mainstream general-domain and TCM-specific LLMs on a unified multimodal benchmark. The datasets and leaderboard are publicly available at https://tcmladder.com or https://54.211.107.106 and will be continuously updated.
Problem

Research questions and friction points this paper is trying to address.

Lack of unified multimodal QA benchmark for TCM LLMs
Existing TCM evaluation datasets are text-based and limited
Need objective framework to assess TCM LLM performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal QA dataset for TCM evaluation
Combines automated and manual filtering processes
Introduces Ladder-Score for answer quality assessment
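The paper does not reproduce the Ladder-Score formula here. As a rough illustration of the general idea of jointly scoring terminology usage and semantic expression, one could imagine a composite metric like the sketch below; the weighting scheme, the term-coverage rule, and the token-overlap proxy for semantics are all assumptions for illustration, not the paper's actual method:

```python
def ladder_style_score(answer: str, reference: str, tcm_terms: list[str],
                       w_term: float = 0.5) -> float:
    """Toy composite score in the spirit of Ladder-Score.

    Mixes a terminology-coverage score with a token-overlap proxy for
    semantic similarity. Illustrative only; not the paper's formula.
    """
    # Terminology coverage: fraction of expected TCM terms used in the answer
    used = sum(1 for term in tcm_terms if term in answer)
    term_score = used / len(tcm_terms) if tcm_terms else 0.0

    # Semantic proxy: token-level F1 between answer and reference
    a_tokens, r_tokens = set(answer.split()), set(reference.split())
    overlap = len(a_tokens & r_tokens)
    if overlap == 0:
        sem_score = 0.0
    else:
        precision = overlap / len(a_tokens)
        recall = overlap / len(r_tokens)
        sem_score = 2 * precision * recall / (precision + recall)

    # Weighted combination of the two components, in [0, 1]
    return w_term * term_score + (1 - w_term) * sem_score
```

In practice the semantic component would likely use embedding-based similarity rather than token overlap, but the structure (terminology signal plus semantic signal, combined into one score) matches the benchmark's stated goal of measuring both terminology accuracy and expressive quality.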
Jiacheng Xie
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA; Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO, USA
Yang Yu
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA; Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO, USA
Ziyang Zhang
Department of Computer Science, McCormick School of Engineering, Northwestern University, Chicago, IL, USA
Shuai Zeng
University of Missouri - Columbia
Jiaxuan He
Department of Computer Science and Mathematics, Truman State University, USA
Ayush Vasireddy
Marquette High School, Chesterfield, MO, USA
Xiaoting Tang
Community Health Service Center Shanghai Pudong New Area, Shanghai, China
Congyu Guo
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA; Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO, USA
Lening Zhao
Yingcai Honors College, University of Electronic Science and Technology of China, Chengdu, China
Congcong Jing
Department of Endocrinology, Shanghai Seventh People's Hospital, Shanghai, China
Guanghui An
School of Acupuncture and Tuina, Shanghai University of Traditional Chinese Medicine, Shanghai, China
Dong Xu
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA; Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO, USA