🤖 AI Summary
Addressing challenges in multimodal electronic health record (EHR) integration—including heterogeneous data fusion, high data dependency, and limited interpretability—this paper proposes a hierarchical multi-agent collaborative framework comprising expert, aggregation, and prediction agents. The framework enables controllable translation of non-textual modalities (e.g., medical images and laboratory values) into structured clinical text summaries and supports unified multimodal reasoning. Centered on large language models (LLMs), the method orchestrates heterogeneous EHR data without requiring large-scale annotated datasets, thereby enhancing generalization. Evaluated on three real-world clinical prediction tasks—sepsis early warning, length-of-stay estimation, and in-hospital mortality risk assessment—the approach consistently outperforms state-of-the-art methods, achieving an average AUC improvement of 3.2%. It further demonstrates strong cross-task adaptability and clinically meaningful interpretability.
📝 Abstract
Multimodal electronic health record (EHR) data provide richer, complementary insights into patient health compared to single-modality data. However, effectively integrating diverse data modalities for clinical prediction modeling remains challenging due to the substantial data requirements. We introduce a novel architecture, Mixture-of-Multimodal-Agents (MoMA), designed to leverage multiple large language model (LLM) agents for clinical prediction tasks using multimodal EHR data. MoMA employs specialized LLM agents ("specialist agents") to convert non-textual modalities, such as medical images and laboratory results, into structured textual summaries. These summaries, together with clinical notes, are combined by another LLM ("aggregator agent") to generate a unified multimodal summary, which is then used by a third LLM ("predictor agent") to produce clinical predictions. Evaluating MoMA on three prediction tasks using real-world datasets with different modality combinations and prediction settings, MoMA outperforms current state-of-the-art methods, highlighting its enhanced accuracy and flexibility across various tasks.