🤖 AI Summary
This study addresses the challenge that existing large language models struggle to perform guideline-adherent, multi-stage reasoning in lung cancer diagnosis and treatment. To bridge this gap, the authors formalize three core tasks of precision lung cancer therapy, introduce LungCURE—the first multimodal clinical benchmark based on 1,000 real-world multicenter cases—and propose LCAgent, a multi-agent framework that integrates multimodal large language models with clinical guideline constraints. By enabling collaborative reasoning among agents, LCAgent effectively mitigates cascading errors in complex clinical decision-making. Experimental results demonstrate substantial performance disparities among models in intricate medical reasoning scenarios, and show that LCAgent, as a plug-and-play module, significantly enhances the accuracy of end-to-end clinical decisions.
📝 Abstract
Lung cancer clinical decision support demands precise reasoning across complex, multi-stage oncological workflows, yet existing multimodal large language models (MLLMs) fail to handle guideline-constrained staging and treatment reasoning. We formalize three oncological precision treatment (OPT) tasks for lung cancer, spanning TNM staging, treatment recommendation, and end-to-end clinical decision support. We introduce LungCURE, the first standardized multimodal benchmark built from 1,000 real-world, clinician-labeled cases across more than 10 hospitals. We further propose LCAgent, a multi-agent framework that ensures guideline-compliant lung cancer clinical decision-making by suppressing cascading reasoning errors across the clinical pathway. Experiments reveal large differences in complex medical reasoning capability across large language models (LLMs) when precise treatment requirements are imposed. We further verify that LCAgent, as a simple yet effective plug-in, enhances the reasoning performance of LLMs in real-world medical scenarios.