Generative Foundation Model for Structured and Unstructured Electronic Health Records

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Electronic health record (EHR) data exhibit high heterogeneity; existing methods often naively tokenize temporal numerical values as plain text, leading to substantial loss of temporal and quantitative information. To address this, we propose Generative Deep Patient (GDP), the first generative foundation framework natively integrating structured longitudinal EHR data—including vital signs and laboratory measurements—with unstructured clinical narratives. GDP employs a CNN-Transformer encoder to capture temporal dynamics and couples it with an LLaMA-based decoder augmented by cross-modal attention, enabling unified support for both clinical event prediction and clinical note generation. Evaluated on MIMIC-IV, GDP achieves AUROC scores of 0.923 (heart failure), 0.817 (type 2 diabetes), and 0.627 (30-day readmission); for note generation, it attains ROUGE-L = 0.135 and BERTScore-F1 = 0.545. Clinical expert evaluation confirms GDP’s superior factual fidelity and clinical utility.

📝 Abstract
Electronic health records (EHRs) are rich but complex repositories of patient data, spanning structured elements (demographics, vitals, lab results, codes), unstructured clinical notes, and other data modalities. Harnessing this heterogeneity is critical for improving patient outcomes. Recent advances in large language models (LLMs) have enabled foundation models that can learn from multiple data modalities and support clinical tasks. However, most current approaches simply serialize numeric EHR data into text, which risks losing temporal and quantitative detail. We introduce Generative Deep Patient (GDP), a multimodal foundation model that natively encodes structured EHR time-series via a CNN-Transformer encoder and fuses them with unstructured EHRs through cross-modal attention into a LLaMA-based decoder. GDP is trained in two stages: (1) generative pretraining, where it learns to produce clinical narratives from raw patient timelines while also performing masked feature prediction (MFP) and next time-step prediction (NTP) to capture temporal dynamics; and (2) multi-task fine-tuning for clinically meaningful predictions (e.g., heart failure, type 2 diabetes, 30-day readmission). In clinical prediction, GDP demonstrated superior performance on MIMIC-IV: heart failure AUROC = 0.923, type 2 diabetes AUROC = 0.817, and 30-day readmission AUROC = 0.627. For narrative generation, GDP achieved ROUGE-L = 0.135 and BERTScore-F1 = 0.545. In a blinded human evaluation, GDP-Instruct scored highest on faithfulness, fluency, and overall clinical utility, suggesting reduced hospital documentation workload without sacrificing accuracy. Our results demonstrate that a single multimodal foundation model can both predict clinically actionable events and generate high-quality clinical narratives. Furthermore, GDP's flexible architecture can be extended to additional modalities.
Problem

Research questions and friction points this paper is trying to address.

Integrating structured and unstructured EHR data for clinical tasks
Preserving temporal and quantitative details in EHR modeling
Generating accurate clinical narratives and predictions simultaneously
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal CNN-Transformer encoder for structured EHR time-series
Cross-modal attention fusion with LLaMA-based decoder architecture
Two-stage training with generative pretraining and multi-task fine-tuning
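The cross-modal fusion named above can be pictured as single-head scaled dot-product attention in which decoder-side text states query the structured time-series encoder's outputs. This is an illustrative sketch in plain Python, not the paper's implementation; the function names and toy dimensions are assumptions for exposition.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention (illustrative).

    queries: decoder text states, shape [n_q][d]
    keys, values: time-series encoder states, shape [n_kv][d]
    Returns one fused vector per query, shape [n_q][d].
    """
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this text state to each encoded time step.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Convex combination of the encoder's value vectors.
        fused = [sum(w * v[j] for w, v in zip(weights, values))
                 for j in range(d)]
        out.append(fused)
    return out
```

In GDP, the real mechanism is multi-head and operates on learned projections inside the LLaMA-based decoder; the sketch only shows how attention lets each generated token weigh the encoded vital-sign and lab trajectory.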
Sonish Sivarajkumar
Eli Lilly and Company
Generative AI, Natural Language Processing, Healthcare AI, Biomedical Informatics
Hang Zhang
Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA
Yuelyu Ji
University of Pittsburgh
Natural language processing, Health information detection, Large language model evaluation
Maneesh Bilalpur
PhD student, University of Pittsburgh
Multimodal Machine Learning, Affective Computing, Pragmatics of Dialogue, Face & Pose, Speech
Xizhi Wu
Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA
Chenyu Li
Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA
Min Gu Kwak
Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA
Shyam Visweswaran
Professor of Biomedical Informatics, University of Pittsburgh
artificial intelligence, machine learning, biomedical informatics, clinical decision support
Yanshan Wang
Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA; Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA; Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, PA; Hillman Cancer Center, University of Pittsburgh Medical Center, Pittsburgh, PA