Data Foundations for Large Scale Multimodal Clinical Foundation Models

📅 2025-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current clinical AI research is constrained by narrow modality and task scope, which hinders the development of multimodal foundation models for holistic health assessment. To address this, we introduce CLIMB, the first large-scale integrative multimodal clinical benchmark, comprising 4.51 million patient samples (19.01 TB) spanning 2D/3D medical imaging, temporal physiological signals, graph-structured data, and multimodal combinations. CLIMB systematically unifies and open-sources five core clinical modalities for the first time. Methodologically, we propose a framework that integrates multimodal standardization and alignment, multi-task contrastive pretraining, modality-specific encoders, and plug-and-play fusion mechanisms, including cross-attention and gated fusion. Experiments demonstrate performance gains of up to 29% on ultrasound and 23% on ECG, two underexplored modalities, alongside significantly improved zero-shot and few-shot cross-task generalization. The codebase and data-access protocols are publicly released.
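The summary mentions gated fusion as one of the plug-and-play mechanisms for combining modality-specific encoders. As a rough illustration of the general idea (a minimal numpy sketch, not the paper's implementation; all names, dimensions, and the two-modality setup are hypothetical), a learned sigmoid gate can weight the contribution of each modality's embedding per dimension:

```python
import numpy as np

def gated_fusion(img_emb, sig_emb, W_g, b_g):
    """Illustrative gated fusion of two modality embeddings.

    img_emb, sig_emb : (d,) embeddings from modality-specific encoders
    W_g : (d, 2d) gate weights, b_g : (d,) gate bias (hypothetical params)
    """
    z = np.concatenate([img_emb, sig_emb])          # joint representation
    gate = 1.0 / (1.0 + np.exp(-(W_g @ z + b_g)))   # sigmoid gate in (0, 1)
    # Convex per-dimension combination of the two modalities
    return gate * img_emb + (1.0 - gate) * sig_emb

# With zero-initialized gate parameters the gate is 0.5 everywhere,
# i.e. an equal-weight average of the two embeddings.
fused = gated_fusion(np.ones(4), np.zeros(4), np.zeros((4, 8)), np.zeros(4))
```

In practice the gate parameters would be trained end-to-end with the rest of the model; cross-attention plays an analogous role when one modality should condition on another rather than be averaged with it.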

📝 Abstract
Recent advances in clinical AI have enabled remarkable progress across many clinical domains. However, existing benchmarks and models are primarily limited to a small set of modalities and tasks, which hinders the development of large-scale multimodal methods that can make holistic assessments of patient health and well-being. To bridge this gap, we introduce Clinical Large-Scale Integrative Multimodal Benchmark (CLIMB), a comprehensive clinical benchmark unifying diverse clinical data across imaging, language, temporal, and graph modalities. CLIMB comprises 4.51 million patient samples totaling 19.01 terabytes distributed across 2D imaging, 3D video, time series, graphs, and multimodal data. Through extensive empirical evaluation, we demonstrate that multitask pretraining significantly improves performance on understudied domains, achieving up to 29% improvement in ultrasound and 23% in ECG analysis over single-task learning. Pretraining on CLIMB also effectively improves models' generalization capability to new tasks, and strong unimodal encoder performance translates well to multimodal performance when paired with task-appropriate fusion strategies. Our findings provide a foundation for new architecture designs and pretraining strategies to advance clinical AI research. Code is released at https://github.com/DDVD233/climb.
Problem

Research questions and friction points this paper is trying to address.

Develop large-scale multimodal methods for holistic patient health assessment.
Unify diverse clinical data across multiple modalities for comprehensive benchmarks.
Improve model performance and generalization in understudied clinical domains.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces CLIMB benchmark for multimodal clinical data
Multitask pretraining boosts performance in understudied domains
Unimodal encoders enhance multimodal performance with fusion
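The multitask-pretraining result (a shared encoder trained across tasks outperforming single-task training on understudied modalities) can be pictured with a toy forward pass. This is a minimal numpy sketch under assumed dimensions, not the paper's architecture; the encoder, task names, and head shapes are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: raw feature size and shared representation size
d_in, d_shared = 16, 8

# One shared encoder reused by every task...
W_enc = rng.normal(size=(d_shared, d_in))

# ...with a lightweight task-specific head per task (illustrative label counts)
heads = {
    "ecg_cls": rng.normal(size=(5, d_shared)),
    "ultrasound_cls": rng.normal(size=(3, d_shared)),
}

def forward(x, task):
    """Shared-encoder, per-task-head forward pass (sketch)."""
    h = np.tanh(W_enc @ x)   # shared representation, updated by all tasks
    return heads[task] @ h   # task-specific logits

x = rng.normal(size=d_in)
ecg_logits = forward(x, "ecg_cls")          # shape (5,)
us_logits = forward(x, "ultrasound_cls")    # shape (3,)
```

Because gradients from every task flow through `W_enc` during pretraining, data-rich tasks can improve the representation used by data-poor ones, which is one plausible reading of the reported ultrasound and ECG gains.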
Authors
Wei Dai, Massachusetts Institute of Technology
Peilin Chen, University of Virginia
Malinda Lu, Massachusetts Institute of Technology
Daniel Li, Massachusetts Institute of Technology
Haowen Wei, Harvard Medical School
Hejie Cui, Stanford University

Topics: Large Language Models, Multimodal Learning, Data Mining, Machine Learning, AI for Health