Lessons Learned from Evaluation of LLM based Multi-agents in Safer Therapy Recommendation

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address drug conflicts and care coordination challenges in managing patients with multimorbidity, this paper proposes a large language model (LLM)-based recommendation framework featuring both single-agent and multi-agent configurations—mimicking general practitioner decision-making and multidisciplinary team (MDT) consultation, respectively. Its key contribution is a novel dual-dimensional evaluation metric integrating clinical goal attainment and medication burden, moving beyond conventional technical performance indicators. Experimental results show that the optimal single-agent configuration achieves 100% clinical goal attainment, matching both the multi-agent system and real-world MDT performance and demonstrating LLMs' strong potential for complex clinical reasoning. However, the generated regimens are often incomplete and include unnecessary polypharmacy, revealing current LLM limitations in fine-grained, evidence-based pharmacotherapy decisions.

📝 Abstract
Therapy recommendation for chronic patients with multimorbidity is challenging due to the risk of treatment conflicts, and existing decision support systems face scalability limitations. Inspired by the way general practitioners (GPs) manage multimorbidity patients, occasionally convening multidisciplinary team (MDT) consultations, this study investigated the feasibility and value of using a Large Language Model (LLM)-based multi-agent system (MAS) for safer therapy recommendations. We designed a single-agent and a MAS framework simulating MDT decision-making by enabling discussion among LLM agents to resolve medical conflicts. The systems were evaluated on therapy planning tasks for multimorbidity patients using benchmark cases, and we compared MAS performance with single-agent approaches and real-world benchmarks. An important contribution of our study is the definition of evaluation metrics that go beyond technical precision and recall, enabling inspection of the clinical goals met and the medication burden of the proposed advice against a gold-standard benchmark. Our results show that with current LLMs, a single GP agent performs as well as MDTs. The best-scoring models provide correct recommendations that address all clinical goals, yet the advice is incomplete. Some models also propose unnecessary medications, resulting in avoidable conflicts between medications and conditions or drug-drug interactions.
Problem

Research questions and friction points this paper is trying to address.

Addressing therapy conflicts in chronic multimorbidity patients
Overcoming scalability limits in decision support systems
Evaluating LLM multi-agent systems for safer recommendations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based multi-agent system for therapy recommendations
Simulated MDT decision-making via agent discussions
Evaluation metrics beyond precision and recall
Yicong Wu
Zhejiang University, Hangzhou, China
Ting Chen
The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
Irit Hochberg
Bruce Rappaport Faculty of Medicine, Technion – Israel Institute of Technology; Hillel Yaffe Medical Center, Hadera, Israel
Zhoujian Sun
Zhejiang Lab
Medical Decision Making · Natural Language Processing
Ruth Edry
Bruce Rappaport Faculty of Medicine, Technion – Israel Institute of Technology; Rambam Medical Center, Haifa, Israel
Zhengxing Huang
College of Biomedical Engineering and Instrument Science, Zhejiang University
Medical Informatics · Healthcare Data Mining · Artificial Intelligence in Medicine
Mor Peleg
University of Haifa
Medical Informatics · Information Systems · Artificial Intelligence · Bioinformatics