🤖 AI Summary
This work addresses the challenge of training robust cancer prognosis models from clinical multimodal data, which often suffer from missing modalities. To this end, the authors propose PRIME, a framework that enables missing-aware multimodal representation learning without reconstructing raw signals, by unifying token embeddings into a shared space and leveraging a shared prototype memory bank. PRIME introduces a patient-level consensus retrieval mechanism for semantic imputation and a structured missingness augmentation strategy, enabling, for the first time, robust self-supervised pretraining without any fully paired samples. The approach jointly optimizes cross-modal alignment and fusion consistency. After label-free pretraining on 32 TCGA cancer types, PRIME achieves state-of-the-art average performance across five downstream tasks (C-index: 0.653; AUROC: 0.689/0.637), significantly improving robustness under test-time missing modalities while also improving parameter and label efficiency.
📝 Abstract
Multimodal self-supervised pretraining offers a promising route to cancer prognosis by integrating histopathology whole-slide images, gene expression, and pathology reports, yet most existing approaches require fully paired and complete inputs. In practice, clinical cohorts are fragmented and often miss one or more modalities, limiting both supervised fusion and scalable multimodal pretraining. We propose PRIME, a missing-aware multimodal self-supervised pretraining framework that learns robust and transferable representations from partially observed cohorts. PRIME maps heterogeneous modality embeddings into a unified token space and introduces a shared prototype memory bank for latent-space semantic imputation via patient-level consensus retrieval, producing structurally aligned tokens without reconstructing raw signals. Two complementary pretraining objectives, inter-modality alignment and post-fusion consistency under structured missingness augmentation, jointly learn representations that remain predictive under arbitrary modality subsets. We evaluate PRIME on The Cancer Genome Atlas with label-free pretraining on 32 cancer types and downstream 5-fold evaluation on five cohorts across overall survival prediction, 3-year mortality classification, and 3-year recurrence classification. PRIME achieves the best macro-average performance among all compared methods, reaching 0.653 C-index, 0.689 AUROC, and 0.637 AUROC on the three tasks, respectively, while improving robustness under test-time missingness and supporting parameter-efficient and label-efficient adaptation. These results support missing-aware multimodal pretraining as a practical strategy for prognosis modeling in fragmented clinical data settings.
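To make the two key mechanisms concrete, the sketch below illustrates latent-space imputation via consensus retrieval over a shared prototype bank, and structured missingness augmentation. This is a minimal NumPy simplification under our own assumptions, not the paper's implementation: the modality names (`wsi`, `rna`, `text`), the softmax-weighted prototype retrieval, and the random-drop augmentation are hypothetical stand-ins for PRIME's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
MODALITIES = ["wsi", "rna", "text"]  # hypothetical modality names


def impute_missing(tokens, prototypes, observed):
    """Impute missing-modality tokens in the latent space (sketch).

    tokens:     dict modality -> (d,) embedding in the unified token space
    prototypes: (K, d) shared prototype memory bank
    observed:   set of modality names present for this patient
    """
    # Patient-level consensus: average the retrieval weights over all
    # observed modalities, so the imputed token reflects agreement
    # across what was actually measured for this patient.
    weights = np.zeros(len(prototypes))
    for m in observed:
        scores = prototypes @ tokens[m]        # similarity to each prototype
        scores = np.exp(scores - scores.max())  # stable softmax
        weights += scores / scores.sum()
    weights /= len(observed)
    imputed = weights @ prototypes             # convex combination of prototypes
    # Observed modalities keep their real tokens; missing ones get the
    # consensus-retrieved imputation, so downstream fusion always sees
    # a structurally complete token set.
    return {m: tokens.get(m, imputed) for m in MODALITIES}


def missingness_augmentation(observed):
    """Structured augmentation: randomly drop observed modalities during
    pretraining to simulate test-time missingness (sketch)."""
    keep = [m for m in observed if rng.random() > 0.5]
    # Always keep at least one modality so the sample stays usable.
    return set(keep) if keep else {str(rng.choice(sorted(observed)))}
```

The post-fusion consistency objective would then compare representations fused from the full observed set against those fused from an augmented subset, encouraging predictions that are stable under arbitrary missingness patterns.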