🤖 AI Summary
To address performance degradation caused by missing modalities in multimodal learning, this paper proposes SimMLM: a unified framework that adaptively fuses multimodal features via a dynamic gating mechanism, eliminating the need for complex imputation strategies or modality-specific network designs. It introduces a novel More vs. Fewer (MoFe) ranking loss that encourages task performance to improve, or at least not degrade, as more modalities become available. It further develops a Dynamic Mixture of Modality Experts (DMoME) architecture to enhance model robustness and interpretability. SimMLM is trained end-to-end and achieves state-of-the-art performance on multimodal medical image segmentation and classification tasks. Crucially, it maintains high accuracy and stability across both complete-modality and diverse partial-modality settings, demonstrating superior generalization under modality absence.
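The dynamic gating idea in DMoME can be illustrated with a minimal sketch: one expert per modality, a learned gate score per modality, and a mask that zeroes out missing modalities before normalization. All names here (`gated_fusion`, the toy inputs) are illustrative assumptions, not the paper's actual implementation, which is a trained neural network.

```python
import math

def gated_fusion(expert_outputs, gate_logits, available):
    """Illustrative fusion of per-modality expert outputs with a gate.

    expert_outputs: list of per-modality feature vectors (one per expert)
    gate_logits:    raw gate scores, one per modality (learned in practice)
    available:      booleans marking which modalities are present

    Missing modalities are masked before the softmax, so the weights of
    the remaining experts are renormalized automatically.
    """
    # Set logits of missing modalities to -inf so softmax assigns them weight 0.
    masked = [g if a else float("-inf") for g, a in zip(gate_logits, available)]
    m = max(masked)  # subtract the max for numerical stability
    exps = [math.exp(g - m) for g in masked]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of expert feature vectors.
    dim = len(expert_outputs[0])
    fused = [sum(w * out[d] for w, out in zip(weights, expert_outputs))
             for d in range(dim)]
    return fused, weights
```

Because the mask is applied inside the softmax, the same forward pass handles both full-modality and partial-modality inputs without any imputation step.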
📝 Abstract
In this paper, we propose SimMLM, a simple yet powerful framework for multimodal learning with missing modalities. Unlike existing approaches that rely on sophisticated network architectures or complex data imputation techniques, SimMLM provides a generic and effective solution that adapts to various missing modality scenarios with improved accuracy and robustness. Specifically, SimMLM consists of a generic Dynamic Mixture of Modality Experts (DMoME) architecture, featuring a dynamic, learnable gating mechanism that automatically adjusts each modality's contribution in both full and partial modality settings. A key innovation of SimMLM is the proposed More vs. Fewer (MoFe) ranking loss, which ensures that task accuracy improves or remains stable as more modalities are made available. This aligns the model with an intuitive principle: removing one or more modalities should not increase accuracy. We validate SimMLM on multimodal medical image segmentation (BraTS 2018) and multimodal classification (UPMC Food-101, avMNIST) tasks, where it consistently surpasses competing methods, demonstrating superior accuracy, interpretability, robustness, and reliability across both complete and missing modality scenarios at test time.
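The "more should not be worse than fewer" principle behind MoFe can be sketched as a hinge-style ranking term: compare the task loss computed with a larger modality subset against the loss with a nested smaller subset, and penalize the model only when the larger subset does worse. This is a hedged illustration of the idea; the function name, margin parameter, and usage are assumptions, not the paper's exact formulation.

```python
def mofe_ranking_loss(loss_more, loss_fewer, margin=0.0):
    """Hinge penalty enforcing the More-vs-Fewer ordering.

    loss_more:  task loss computed with a larger set of modalities
    loss_fewer: task loss computed with a subset of those modalities
    margin:     optional slack encouraging a strict ordering

    Returns 0 when the larger subset already achieves a lower (or equal,
    up to the margin) loss; otherwise returns the violation amount.
    """
    return max(0.0, loss_more - loss_fewer + margin)
```

During training this term would be summed over sampled pairs of nested modality subsets and added to the main task loss, pushing the model to use extra modalities only in ways that help.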