🤖 AI Summary
Existing EHR studies predominantly rely on unimodal modeling—either structured codes or clinical text alone—or employ naive multimodal fusion, overlooking the intrinsic semantic synergy and complementarity between these modalities.
Method: We propose the first multimodal contrastive learning framework for EHRs that jointly models structured clinical codes and unstructured clinical text. It leverages contrastive representation learning to uncover cross-modal clinical associations. Theoretically, we prove that our contrastive loss is equivalent to singular value decomposition of the pointwise mutual information matrix, providing an interpretable foundation for privacy-preserving representation learning. Methodologically, we design a multimodal embedding generator and a privacy-enhanced training mechanism.
Contribution/Results: Evaluated on real-world EHR data, our approach significantly improves downstream clinical tasks—including disease prediction and risk stratification—with an average AUC gain of 3.2%, demonstrating both clinical efficacy and robustness.
📝 Abstract
Electronic health record (EHR) systems contain a wealth of multimodal clinical data including structured data like clinical codes and unstructured data such as clinical notes. However, many existing EHR-focused studies has traditionally either concentrated on an individual modality or merged different modalities in a rather rudimentary fashion. This approach often results in the perception of structured and unstructured data as separate entities, neglecting the inherent synergy between them. Specifically, the two important modalities contain clinically relevant, inextricably linked and complementary health information. A more complete picture of a patient's medical history is captured by the joint analysis of the two modalities of data. Despite the great success of multimodal contrastive learning on vision-language, its potential remains under-explored in the realm of multimodal EHR, particularly in terms of its theoretical understanding. To accommodate the statistical analysis of multimodal EHR data, in this paper, we propose a novel multimodal feature embedding generative model and design a multimodal contrastive loss to obtain the multimodal EHR feature representation. Our theoretical analysis demonstrates the effectiveness of multimodal learning compared to single-modality learning and connects the solution of the loss function to the singular value decomposition of a pointwise mutual information matrix. This connection paves the way for a privacy-preserving algorithm tailored for multimodal EHR feature representation learning. Simulation studies show that the proposed algorithm performs well under a variety of configurations. We further validate the clinical utility of the proposed algorithm in real-world EHR data.