PRIME: Prototype-Driven Multimodal Pretraining for Cancer Prognosis with Missing Modalities

📅 2026-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of training robust cancer prognosis models from clinical multimodal data, which often suffer from missing modalities. To this end, the authors propose PRIME, a framework that enables missing-aware multimodal representation learning without reconstructing raw signals, by unifying token embeddings into a shared space and leveraging a shared prototype memory bank. PRIME introduces a patient-level consensus retrieval mechanism for semantic imputation and a structured missingness augmentation strategy, enabling, for the first time, robust self-supervised pretraining without any fully paired samples. The approach jointly optimizes cross-modal alignment and fusion consistency. After unsupervised pretraining on 32 TCGA cancer types, PRIME achieves state-of-the-art average performance across five downstream tasks (C-index: 0.653; AUROC: 0.689/0.637), while significantly improving robustness to missing modalities at test time and supporting parameter- and label-efficient adaptation.
📝 Abstract
Multimodal self-supervised pretraining offers a promising route to cancer prognosis by integrating histopathology whole-slide images, gene expression, and pathology reports, yet most existing approaches require fully paired and complete inputs. In practice, clinical cohorts are fragmented and often miss one or more modalities, limiting both supervised fusion and scalable multimodal pretraining. We propose PRIME, a missing-aware multimodal self-supervised pretraining framework that learns robust and transferable representations from partially observed cohorts. PRIME maps heterogeneous modality embeddings into a unified token space and introduces a shared prototype memory bank for latent-space semantic imputation via patient-level consensus retrieval, producing structurally aligned tokens without reconstructing raw signals. Two complementary pretraining objectives, inter-modality alignment and post-fusion consistency under structured missingness augmentation, jointly learn representations that remain predictive under arbitrary modality subsets. We evaluate PRIME on The Cancer Genome Atlas with label-free pretraining on 32 cancer types and downstream 5-fold evaluation on five cohorts across overall survival prediction, 3-year mortality classification, and 3-year recurrence classification. PRIME achieves the best macro-average performance among all compared methods, reaching 0.653 C-index, 0.689 AUROC, and 0.637 AUROC on the three tasks, respectively, while improving robustness under test-time missingness and supporting parameter-efficient and label-efficient adaptation. These results support missing-aware multimodal pretraining as a practical strategy for prognosis modeling in fragmented clinical data settings.
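The abstract's core mechanism, latent-space semantic imputation via a shared prototype memory bank with patient-level consensus retrieval, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bank size `K`, token dimension `d`, the cosine-similarity voting, and the softmax read-out are all assumptions chosen to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): K prototypes in a d-dim token space.
K, d = 16, 8
prototypes = rng.normal(size=(K, d))  # shared prototype memory bank
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def impute_missing(observed_tokens):
    """Produce a surrogate token for a missing modality in latent space.

    Each observed modality token votes for prototypes by cosine similarity;
    the patient-level consensus over those votes reads a surrogate token out
    of the shared memory bank, so no raw-signal reconstruction is needed.
    """
    obs = observed_tokens / np.linalg.norm(observed_tokens, axis=1, keepdims=True)
    sims = obs @ prototypes.T                      # (n_observed_modalities, K)
    # Softmax vote per observed modality, then average across modalities:
    # this averaging is the "patient-level consensus" step.
    weights = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)
    consensus = weights.mean(axis=0)               # (K,)
    return consensus @ prototypes                  # surrogate token, shape (d,)

# Patient with 2 of 3 modalities observed (e.g. WSI + gene expression).
observed = rng.normal(size=(2, d))
surrogate = impute_missing(observed)
print(surrogate.shape)  # (8,)
```

Because the surrogate lives in the same unified token space as the observed modalities, it can be fed to the fusion module exactly like a real modality token, which is what makes the downstream objectives missingness-agnostic.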
Problem

Research questions and friction points this paper is trying to address.

multimodal pretraining
missing modalities
cancer prognosis
clinical data fragmentation
incomplete multimodal data
Innovation

Methods, ideas, or system contributions that make the work stand out.

missing-aware pretraining
prototype memory bank
multimodal alignment
semantic imputation
structured missingness augmentation
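Of the innovations listed above, structured missingness augmentation is easy to make concrete: during pretraining, whole modalities are masked per patient so the model sees realistic fragmentation patterns rather than random feature dropout. The sketch below samples a non-empty modality subset to keep; the modality names and the uniform sampling over subsets are illustrative assumptions, not the paper's exact scheme.

```python
import itertools
import random

# Modality names follow the paper's three inputs: whole-slide images,
# gene expression, and pathology reports.
MODALITIES = ("wsi", "rna", "report")

def sample_missingness_pattern(available, rng=random):
    """Sample a non-empty subset of modalities to keep for one patient.

    Structured missingness masks entire modalities (not individual
    features), mimicking how clinical cohorts are actually fragmented.
    """
    patterns = [subset
                for r in range(1, len(available) + 1)
                for subset in itertools.combinations(available, r)]
    return set(rng.choice(patterns))

random.seed(0)
pattern = sample_missingness_pattern(MODALITIES)
print(sorted(pattern))
```

In the full framework, the post-fusion consistency objective would then compare the fused representation computed from this sampled subset against the one computed from all available modalities.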
Kai Yu
University of Minnesota
Medical Image Analysis · Deep Learning

Shuang Zhou
University of Minnesota, Hong Kong Polytechnic University
Biomedical Informatics · Large Language Models · AI for Healthcare · Electronic Health Records

Yiran Song
Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN, USA

Zaifu Zhan
PhD at University of Minnesota, MS at Tsinghua University
Natural Language Processing · Machine Learning · AI for Biomedicine · Large Language Models

Jie Peng
University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Kaixiong Zhou
Assistant Professor, North Carolina State University
Machine Learning · AI4Science · Graph Data Mining

Tianlong Chen
Assistant Professor, CS@UNC Chapel Hill; Chief AI Scientist, hireEZ
Machine Learning · AI4Science · Computer Vision · Sparsity

Feng Xie
Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN, USA

Meng Wang
Centre for Innovation & Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore

Huazhu Fu
Principal Scientist, IHPC, A*STAR
Medical Image Analysis · AI for Healthcare · Medical AI · Trustworthy AI

Mingquan Lin
Assistant Professor at University of Minnesota
Medical Image Analysis · Deep Learning

Rui Zhang
Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN, USA