🤖 AI Summary
Current biomedical question-answering models perform poorly in clinical multi-turn diagnosis: single-turn symptom input often leads to ambiguous diagnoses, while supervised learning paradigms generalize poorly and struggle to extract critical information. To address these challenges, we propose a multi-agent cooperative reinforcement learning framework tailored for uncertainty-aware decision-making. Our approach introduces, for the first time, a PPO-based physician agent that dynamically generates diagnostic questions. We further construct MTMedDialog, the first English-language multi-turn medical dialogue dataset, and design a reward function grounded in an LLM-driven doctor-patient dual-agent simulation and an interpretable Consultation Evaluator. Experiments demonstrate substantial improvements in multi-step reasoning capability and final diagnostic accuracy. The framework generalizes well and offers practical utility in real-world clinical decision-support scenarios.
📝 Abstract
Large language models (LLMs) have demonstrated excellent capabilities in biomedical question answering, but their application in real-world clinical consultations still faces core challenges. Existing systems rely on a one-way information transmission mode in which patients must fully describe their symptoms in a single round, leading to nonspecific diagnostic recommendations when complaints are vague. Traditional multi-turn dialogue methods based on supervised learning are constrained by static data-driven paradigms, lack generalizability, and struggle to intelligently extract key clinical information. To address these limitations, we propose DoctorAgent-RL, a reinforcement learning (RL)-based multi-agent collaborative framework that models medical consultations as a dynamic decision-making process under uncertainty. The doctor agent continuously optimizes its questioning strategy within the RL framework through multi-turn interactions with the patient agent, dynamically adjusting its information-gathering path based on comprehensive rewards from the Consultation Evaluator. This RL fine-tuning mechanism enables LLMs to autonomously develop interaction strategies aligned with clinical reasoning logic, rather than superficially imitating patterns in existing dialogue data. Notably, we constructed MTMedDialog, the first English multi-turn medical consultation dataset capable of simulating patient interactions. Experiments show that DoctorAgent-RL outperforms existing models in both multi-turn reasoning capability and final diagnostic performance, demonstrating practical value in assisting clinical consultations. https://github.com/JarvisUSTC/DoctorAgent-RL
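The interaction pattern the abstract describes can be pictured as a loop: the doctor agent asks questions turn by turn, the patient agent answers, and an evaluator scores the episode (information gathered minus a per-turn cost), with that reward then driving policy optimization such as PPO. The following is a minimal toy sketch of that loop only; all names (`PatientSim`, `evaluate`, `run_episode`) and the reward weights are hypothetical illustrations, not the DoctorAgent-RL implementation.

```python
# Toy sketch of a multi-turn consultation episode with an evaluator reward.
# Hypothetical names and numbers; not the paper's actual implementation.

HIDDEN_SYMPTOMS = {"fever", "cough", "fatigue"}  # ground-truth patient state


class PatientSim:
    """Patient agent stand-in: answers yes/no to symptom questions."""

    def answer(self, symptom: str) -> bool:
        return symptom in HIDDEN_SYMPTOMS


def evaluate(found: set, turns: int) -> float:
    """Consultation Evaluator stand-in: information gained minus a turn cost."""
    return len(found & HIDDEN_SYMPTOMS) - 0.1 * turns


def run_episode(questions: list, patient: PatientSim, max_turns: int = 4):
    """Doctor agent asks its planned questions over multiple turns."""
    found = set()
    turns = min(len(questions), max_turns)
    for symptom in questions[:max_turns]:
        if patient.answer(symptom):
            found.add(symptom)
    return found, evaluate(found, turns)


found, reward = run_episode(["fever", "headache", "cough"], PatientSim())
```

In the actual framework, this scalar reward would be fed back into RL fine-tuning of the question-generation policy, so the doctor agent learns which follow-up questions reduce diagnostic uncertainty fastest.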