🤖 AI Summary
This study addresses the lack of high-fidelity, trustworthy AI-based simulated patients for medical education and clinical decision-making. The authors propose AIPatient, a framework that constructs a clinical knowledge graph from MIMIC-III electronic health records (EHRs) and runs a reasoning-augmented retrieval-augmented generation (RAG) multi-agent workflow, comprising six specialized LLM agents for retrieval, knowledge graph querying, abstraction, verification, rewriting, and summarization, to enable knowledge-driven dynamic reasoning and natural clinician-patient interaction. Its key contributions are a domain-specific clinical knowledge graph and a structured multi-agent reasoning paradigm for healthcare. Experiments demonstrate strong performance: 94.15% accuracy on EHR question answering, a knowledge-base F1 score of 0.89, high readability (median Flesch Reading Ease 77.23), and robustness and stability across repeated runs (ANOVA, p > 0.1 in both tests), suggesting suitability for practical deployment.
📝 Abstract
Simulated patient systems play a crucial role in modern medical education and research, providing safe, integrative learning environments and enabling clinical decision-making simulations. Large Language Models (LLMs) could advance simulated patient systems by replicating medical conditions and patient-doctor interactions with high fidelity and low cost. However, ensuring the effectiveness and trustworthiness of these systems remains a challenge, as they require a large, diverse, and precise patient knowledge base, along with robust and stable knowledge diffusion to users. Here, we developed AIPatient, an advanced simulated patient system with the AIPatient Knowledge Graph (AIPatient KG) as the input and the Reasoning Retrieval-Augmented Generation (Reasoning RAG) agentic workflow as the generation backbone. AIPatient KG samples data from Electronic Health Records (EHRs) in the Medical Information Mart for Intensive Care (MIMIC)-III database, producing a clinically diverse and relevant cohort of 1,495 patients with high knowledge-base validity (F1 = 0.89). Reasoning RAG leverages six LLM-powered agents spanning the tasks of retrieval, KG query generation, abstraction, checking, rewriting, and summarization. This agentic framework reaches an overall accuracy of 94.15% in EHR-based medical Question Answering (QA), outperforming benchmarks that use either no agents or only partial agent integration. Our system also presents high readability (median Flesch Reading Ease 77.23; median Flesch-Kincaid Grade 5.6), robustness (ANOVA F = 0.6126, p > 0.1), and stability (ANOVA F = 0.782, p > 0.1). The promising performance of the AIPatient system highlights its potential to support a wide range of applications, including medical education, model evaluation, and system integration.
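The agentic loop the abstract describes (retrieve, generate a KG query, abstract the facts, check them, rewrite on failure, summarize) can be sketched as below. This is a minimal illustrative sketch only: every function body is a stand-in for an LLM agent call, and the toy knowledge graph, patient ID, and keyword heuristics are invented for the example, not taken from AIPatient.

```python
# Hypothetical sketch of a Reasoning-RAG-style agentic workflow.
# Agent names mirror the abstract; all logic is a toy stand-in for LLM calls.

# Toy "knowledge graph": patient -> relation -> facts (invented data).
KG = {
    "patient_001": {
        "has_symptom": ["chest pain", "shortness of breath"],
        "has_allergy": ["penicillin"],
    }
}

def retrieval_agent(question: str) -> bool:
    """Decide whether answering requires facts from the knowledge graph."""
    return any(kw in question.lower() for kw in ("symptom", "allerg", "history"))

def query_agent(question: str) -> str:
    """Map the question to a KG relation (stand-in for KG query generation)."""
    return "has_allergy" if "allerg" in question.lower() else "has_symptom"

def abstraction_agent(relation: str, patient: str) -> list[str]:
    """Retrieve and abstract the matching facts from the KG."""
    return KG.get(patient, {}).get(relation, [])

def checker_agent(facts: list[str]) -> bool:
    """Verify that retrieval produced usable facts before answering."""
    return len(facts) > 0

def rewrite_agent(question: str) -> str:
    """Rewrite the question (e.g. expand shorthand) so retrieval can retry."""
    return question.replace("sx", "symptom")

def summarization_agent(facts: list[str]) -> str:
    """Compose the patient-voice answer from the verified facts."""
    return "I have " + " and ".join(facts) + "."

def answer(question: str, patient: str = "patient_001", max_rewrites: int = 1) -> str:
    """Run the agent pipeline, retrying via rewrite when the check fails."""
    for _ in range(max_rewrites + 1):
        if not retrieval_agent(question):
            return "I'm not sure that applies to me."
        facts = abstraction_agent(query_agent(question), patient)
        if checker_agent(facts):
            return summarization_agent(facts)
        question = rewrite_agent(question)  # checker failed: rewrite and retry
    return "I'm not sure."

print(answer("What symptoms do you have?"))
```

The point of the structure is the checker-then-rewrite feedback edge: an answer is only summarized after retrieved facts pass verification, which is what distinguishes this workflow from a plain single-pass RAG call.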