Learning Contrastive Multimodal Fusion with Improved Modality Dropout for Disease Detection and Prediction

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address pervasive modality missingness and imbalance in clinical multimodal data, this paper proposes a robust multimodal fusion framework. It introduces learnable modality tokens to enable missingness-aware feature alignment; designs an enhanced modality dropout mechanism to explicitly model modality missing patterns; and incorporates cross-modal contrastive learning to improve generalization under single-modality inputs. The framework seamlessly integrates with state-of-the-art vision foundation models (e.g., CT-specific models) and supports joint modeling of visual data (e.g., medical images) and structured tabular data. Evaluated on large-scale real-world clinical datasets, the method significantly outperforms existing baselines—particularly under partial modality availability—while maintaining high accuracy, computational efficiency, and clinical applicability. Its robustness to heterogeneous missingness patterns, compatibility with modern vision architectures, and strong single-modality performance collectively advance practical deployment of multimodal learning in clinical settings.
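The core robustness mechanism described above, replacing dropped or missing modalities with learnable modality tokens, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `modality_dropout`, the dict-based feature layout, and the fixed token vectors are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def modality_dropout(features, missing_tokens, p_drop=0.3, force_keep_one=True):
    """Replace dropped or absent modality features with learnable tokens.

    features: dict modality -> (d,) embedding, or None if missing in the data
    missing_tokens: dict modality -> (d,) learnable stand-in token
    """
    present = [m for m, f in features.items() if f is not None]
    # drop a modality if it is already missing or hit by random dropout
    drop = {m: (features[m] is None) or (rng.random() < p_drop) for m in features}
    if force_keep_one and present and all(drop[m] for m in present):
        drop[rng.choice(present)] = False  # always keep at least one real modality
    return {m: (missing_tokens[m] if drop[m] else features[m]) for m in features}

# toy example: image embedding present, tabular embedding missing
d = 4
feats = {"image": np.ones(d), "tabular": None}
tokens = {"image": np.zeros(d), "tabular": np.full(d, 0.5)}
fused_inputs = modality_dropout(feats, tokens, p_drop=0.0)
```

Training on token-substituted inputs like this is what lets the fusion model see realistic missingness patterns, so single-modality inference at test time is no longer out of distribution.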

📝 Abstract
As medical diagnoses increasingly leverage multimodal data, machine learning models are expected to effectively fuse heterogeneous information while remaining robust to missing modalities. In this work, we propose a novel multimodal learning framework that integrates enhanced modality dropout and contrastive learning to address real-world limitations such as modality imbalance and missingness. Our approach introduces learnable modality tokens to improve missingness-aware fusion of modalities and augments conventional unimodal contrastive objectives with fused multimodal representations. We validate our framework on large-scale clinical datasets for disease detection and prediction tasks, encompassing both visual and tabular modalities. Experimental results demonstrate that our method achieves state-of-the-art performance, particularly in challenging and practical scenarios where only a single modality is available. Furthermore, we show its adaptability through successful integration with a recent CT foundation model. Our findings highlight the effectiveness, efficiency, and generalizability of our approach for multimodal learning, offering a scalable, low-cost solution with significant potential for real-world clinical applications. The code is available at https://github.com/omron-sinicx/medical-modality-dropout.
Problem

Research questions and friction points this paper is trying to address.

Fusing multimodal medical data effectively while handling missing modalities
Addressing modality imbalance and missingness in disease detection and prediction
Improving robustness when only a single medical modality is available
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced modality dropout for robust fusion
Learnable tokens improve missingness-aware representation
Contrastive learning with fused multimodal objectives
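The last bullet, augmenting unimodal contrastive objectives with the fused representation, can be sketched with a standard InfoNCE-style loss applied across all pairs of views. This is a hedged illustration under assumptions: the averaging used as a stand-in for the model's fusion, the temperature value, and the `info_nce` helper are invented for the example, not taken from the paper.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style loss: matching rows of the two batches are positives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                      # (B, B) similarity matrix
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return -log_probs[idx, idx].mean()                  # positives on the diagonal

# fused embedding treated as an extra contrastive view, per the paper's idea
B, d = 8, 16
rng = np.random.default_rng(1)
img, tab = rng.normal(size=(B, d)), rng.normal(size=(B, d))
fused = (img + tab) / 2                                 # stand-in for the learned fusion
loss = info_nce(img, tab) + info_nce(img, fused) + info_nce(tab, fused)
```

Pulling each unimodal embedding toward the fused one (rather than only toward the other modality) is what the summary credits with better generalization under single-modality inputs.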