Arbitrary Data as Images: Fusion of Patient Data Across Modalities and Irregular Intervals with Vision Transformers

📅 2025-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of modeling heterogeneous, irregularly sampled clinical data—such as time-series vital signs, discrete lab values, medication records, X-rays/ECGs, and unstructured clinical notes—for inpatient risk prediction, this paper introduces a novel “data-to-image” paradigm. It transforms structured and semi-structured clinical data into standardized 2D images, then jointly encodes them with textual inputs using a vision-language Transformer for end-to-end cross-modal fusion. This approach eliminates the modality-specific encoders of conventional multimodal architectures, reframing medical AI development as visual prompt engineering and enabling no-code deployment. Evaluated on MIMIC-IV (6,175 patients), the method achieves up to a 4.2 percentage-point improvement in AUC over state-of-the-art methods on in-hospital mortality prediction and disease phenotyping tasks.

📝 Abstract
A patient undergoes multiple examinations during each hospital stay, each providing different facets of the health status. These assessments include temporal data with varying sampling rates, discrete single-point measurements, therapeutic interventions such as medication administration, and images. While physicians can process and integrate diverse modalities intuitively, neural networks require specific modeling for each modality, complicating the training procedure. We demonstrate that this complexity can be significantly reduced by visualizing all information as images alongside unstructured text and subsequently training a conventional vision-text transformer. Our approach, Vision Transformer for irregular sampled Multi-modal Measurements (ViTiMM), not only simplifies data preprocessing and modeling but also outperforms current state-of-the-art methods in predicting in-hospital mortality and phenotyping, as evaluated on 6,175 patients from the MIMIC-IV dataset. The modalities include patients' clinical measurements, medications, X-ray images, and electrocardiography scans. We hope our work inspires advancements in multi-modal medical AI by reducing the training complexity to (visual) prompt engineering, thus lowering entry barriers and enabling no-code solutions for training. The source code will be made publicly available.
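The core idea—rendering irregularly sampled measurements as plain images so a standard vision encoder can consume them—can be sketched as follows. This is a hypothetical illustration only: the paper does not specify ViTiMM's actual rendering (layout, colors, channels), so the function name, plot style, and 224×224 ViT-style resolution here are assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt

def vitals_to_image(timestamps, values, size=(224, 224)):
    """Render an irregularly sampled vital-sign series as an RGB image array.

    Hypothetical 'data-to-image' step: irregular sampling is preserved
    visually by plotting raw (time, value) points without resampling.
    """
    dpi = 100
    fig, ax = plt.subplots(figsize=(size[0] / dpi, size[1] / dpi), dpi=dpi)
    ax.plot(timestamps, values, marker="o")  # markers expose sampling gaps
    ax.set_axis_off()
    fig.canvas.draw()
    buf = np.asarray(fig.canvas.buffer_rgba())[..., :3]  # drop alpha channel
    plt.close(fig)
    return buf  # H x W x 3 uint8, ready for a ViT-style image encoder

# Irregular heart-rate samples over a 24 h window (hours, bpm)
img = vitals_to_image([0.0, 1.5, 6.0, 6.2, 18.0, 24.0],
                      [82, 90, 110, 108, 95, 88])
```

The resulting array can be fed to any off-the-shelf vision-text transformer alongside the clinical notes, which is what removes the need for a bespoke time-series encoder.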
Problem

Research questions and friction points this paper is trying to address.

Healthcare Data Integration
Medical Data Analysis
AI in Healthcare
Innovation

Methods, ideas, or system contributions that make the work stand out.

ViTiMM
Multimodal Medical Data Visualization
Unified Processing for AI
Malte Tolle
Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany; Informatics for Life Institute, Heidelberg, Germany; DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
Mohamad Scharaf
Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany; Informatics for Life Institute, Heidelberg, Germany; DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
Samantha Fischer
Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany; Informatics for Life Institute, Heidelberg, Germany; DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
Christoph Reich
Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany; Informatics for Life Institute, Heidelberg, Germany; DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
Silav Zeid
Preventive Cardiology and Preventive Medicine, Department of Cardiology, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany; Clinical Epidemiology and Systems Medicine, Center for Thrombosis and Hemostasis, University Medical Center Mainz, Johannes Gutenberg University Mainz, Germany; DZHK (German Centre for Cardiovascular Research), partner site Rhine-Main, Mainz, Germany
Christoph Dieterich
Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany; Informatics for Life Institute, Heidelberg, Germany; DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
Benjamin Meder
Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany; Informatics for Life Institute, Heidelberg, Germany; DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
Norbert Frey
Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany; Informatics for Life Institute, Heidelberg, Germany; DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
Philipp Wild
Preventive Cardiology and Preventive Medicine, Department of Cardiology, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany; Clinical Epidemiology and Systems Medicine, Center for Thrombosis and Hemostasis, University Medical Center Mainz, Johannes Gutenberg University Mainz, Germany; DZHK (German Centre for Cardiovascular Research), partner site Rhine-Main, Mainz, Germany; Systems Medicine, Institute of Molecular Biology (IMB), Mainz, Germany
Sandy Engelhardt
Full Professor at Heidelberg University
Cardiac Image Processing; Computer-Assisted Surgery; Deep Learning; Augmented Reality