🤖 AI Summary
To address the challenge of modeling heterogeneous, irregularly sampled clinical data—such as time-series vital signs, discrete lab values, medication records, X-rays/ECGs, and unstructured clinical notes—for inpatient risk prediction, this paper introduces a novel “data-to-image” paradigm. It transforms structured and semi-structured clinical data into standardized 2D images, then jointly encodes them with textual inputs using a vision-language Transformer for end-to-end cross-modal fusion. This approach eliminates the need for separate modality-specific encoders inherent in conventional multimodal architectures, reframing medical AI development as visual prompt engineering and enabling low-code deployment. Evaluated on MIMIC-IV (6,175 patients), the method achieves up to a 4.2 percentage-point improvement in AUC over state-of-the-art methods on in-hospital mortality prediction and disease phenotyping tasks.
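The data-to-image step described above can be sketched in a few lines: rasterizing one irregularly sampled vital-sign series onto a fixed-size 2D grid that a vision model can consume. This is a minimal, dependency-light illustration, not the paper's implementation; the 224×224 grid, the min-max normalization, and the single-channel output are all assumptions for the sketch, and the downstream vision-language Transformer is not shown.

```python
import numpy as np

def series_to_image(times, values, size=224):
    """Rasterize irregular (time, value) samples onto a size x size grid.

    Illustrative sketch: normalizes both axes to [0, 1], then draws line
    segments between consecutive samples so gaps in sampling stay visible
    as long straight strokes.
    """
    img = np.zeros((size, size), dtype=np.float32)
    t = np.asarray(times, dtype=float)
    v = np.asarray(values, dtype=float)
    # Min-max normalize time and value to pixel coordinates (assumption:
    # per-series normalization; a real pipeline might use clinical ranges).
    t = (t - t.min()) / (t.max() - t.min() + 1e-9)
    v = (v - v.min()) / (v.max() - v.min() + 1e-9)
    xs = (t * (size - 1)).astype(int)
    ys = ((1.0 - v) * (size - 1)).astype(int)  # flip so larger values sit higher
    # Connect consecutive samples with densely interpolated line segments.
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        n = max(abs(int(x1) - int(x0)), abs(int(y1) - int(y0)), 1)
        for i in range(n + 1):
            x = x0 + (x1 - x0) * i // n
            y = y0 + (y1 - y0) * i // n
            img[y, x] = 1.0
    return img

# Example: irregularly sampled heart-rate measurements (hours, bpm).
t = [0.0, 0.5, 2.0, 2.2, 6.0, 11.5]
hr = [82, 90, 88, 120, 95, 86]
img = series_to_image(t, hr)
```

The resulting array can be stacked with other rendered modalities (labs, medications) and passed to any off-the-shelf vision-text encoder, which is what makes the approach feel like visual prompt engineering rather than bespoke architecture design.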
📝 Abstract
During each hospital stay, a patient undergoes multiple examinations, each providing a different facet of their health status. These assessments include temporal data with varying sampling rates, discrete single-point measurements, therapeutic interventions such as medication administration, and images. While physicians can process and integrate diverse modalities intuitively, neural networks require modality-specific modeling, which complicates the training procedure. We demonstrate that this complexity can be significantly reduced by visualizing all information as images alongside unstructured text and subsequently training a conventional vision-text transformer. Our approach, Vision Transformer for irregular sampled Multi-modal Measurements (ViTiMM), not only simplifies data preprocessing and modeling but also outperforms current state-of-the-art methods in predicting in-hospital mortality and phenotyping, as evaluated on 6,175 patients from the MIMIC-IV dataset. The modalities include the patients' clinical measurements, medications, X-ray images, and electrocardiography scans. We hope our work inspires advancements in multi-modal medical AI by reducing the training complexity to (visual) prompt engineering, thus lowering entry barriers and enabling no-code solutions for training. The source code will be made publicly available.