SimMLM: A Simple Framework for Multi-modal Learning with Missing Modality

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance degradation caused by missing modalities in multimodal learning, this paper proposes SimMLM, a unified framework that adaptively fuses multimodal features via a dynamic gating mechanism, eliminating the need for complex imputation strategies or modality-specific network designs. It introduces a novel More vs. Fewer (MoFe) ranking loss that encourages non-degrading task performance as the number of available modalities increases, and a Dynamic Mixture of Modality Experts (DMoME) architecture that improves robustness and interpretability. SimMLM is trained end-to-end and achieves state-of-the-art performance on multimodal medical image segmentation and classification tasks. Crucially, it maintains high accuracy and stability across both complete-modality and diverse partial-modality settings, demonstrating superior generalization under modality absence.
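The summary describes a gating mechanism that adjusts each modality's contribution under both full and partial inputs, but gives no equations. A minimal sketch of one plausible reading, a masked softmax over the modalities actually present (function name, inputs, and masking scheme are assumptions, not the paper's exact design):

```python
import math

def gated_fusion(features, gate_logits, available):
    """Fuse per-modality feature vectors with a learnable softmax gate,
    renormalized over whichever modalities are present.

    features:    list of equal-length vectors, one per modality expert
    gate_logits: one scalar gating score per modality
    available:   boolean mask marking which modalities were observed
    """
    # Mask missing modalities with -inf before the softmax so their
    # weight is exactly zero and the remaining weights renormalize.
    masked = [g if a else float("-inf")
              for g, a in zip(gate_logits, available)]
    m = max(masked)  # subtract the max for numerical stability
    exps = [math.exp(g - m) if a else 0.0
            for g, a in zip(masked, available)]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of the expert features.
    dim = len(features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, features))
             for i in range(dim)]
    return fused, weights
```

Renormalizing after masking means the same trained gate serves every modality subset, which is one way a single model can cover both complete- and partial-modality settings without imputation.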

📝 Abstract
In this paper, we propose SimMLM, a simple yet powerful framework for multimodal learning with missing modalities. Unlike existing approaches that rely on sophisticated network architectures or complex data imputation techniques, SimMLM provides a generic and effective solution that can adapt to various missing modality scenarios with improved accuracy and robustness. Specifically, SimMLM consists of a generic Dynamic Mixture of Modality Experts (DMoME) architecture, featuring a dynamic, learnable gating mechanism that automatically adjusts each modality's contribution in both full and partial modality settings. A key innovation of SimMLM is the proposed More vs. Fewer (MoFe) ranking loss, which ensures that task accuracy improves or remains stable as more modalities are made available. This aligns the model with an intuitive principle: removing one or more modalities should not increase accuracy. We validate SimMLM on multimodal medical image segmentation (BraTS 2018) and multimodal classification (UPMC Food-101, avMNIST) tasks, where it consistently surpasses competitive methods, demonstrating superior accuracy, interpretability, robustness, and reliability across both complete and missing modality scenarios at test time.
Problem

Research questions and friction points this paper is trying to address.

How to handle missing modalities in multimodal learning without imputation or modality-specific architectures
How to preserve accuracy and robustness under partial-modality inputs
How to keep task accuracy stable as the set of available modalities varies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Mixture of Modality Experts (DMoME) architecture
Learnable gating mechanism for modality contribution
More vs. Fewer (MoFe) ranking loss principle
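The MoFe loss is stated only as a principle here: accuracy should improve, or at least not degrade, as more modalities become available. A hedged sketch of that principle as a hinge-style ranking penalty on paired task losses (the function name, pairing scheme, and margin are assumptions; the paper's exact formulation may differ):

```python
def mofe_ranking_loss(loss_more, loss_fewer, margin=0.0):
    """Penalty that is nonzero only when the task loss computed with the
    larger modality subset exceeds the loss with the smaller subset,
    i.e. only when adding modalities made the prediction worse.

    loss_more:  task loss using a superset of modalities
    loss_fewer: task loss using a subset of those modalities
    margin:     optional slack enforcing a strict improvement
    """
    return max(0.0, loss_more - loss_fewer + margin)
```

In training, such a term would be added to the ordinary task loss for pairs of modality subsets, steering the gate toward monotone behavior: dropping a modality should never help.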