🤖 AI Summary
To address the challenges of modeling heterogeneous cross-modal interactions and of interpreting the results of multimodal fusion, this paper proposes I2MoE, an end-to-end Interpretable Multimodal Interaction-aware Mixture-of-Experts framework. For interaction modeling, I2MoE trains a set of interaction experts with weakly supervised interaction losses that encourage each expert to capture a different kind of multimodal interaction. For interpretability, a reweighting model assigns an importance score to each expert's output, yielding both sample-level (local) and dataset-level (global) explanations. Evaluated on medical and general multimodal benchmarks, I2MoE combines flexibly with different fusion techniques, consistently improves task performance, and delivers stable, verifiable interaction attributions in real-world scenarios. The core contribution is the tight coupling of heterogeneous interaction modeling with multi-granularity interpretability, addressing the limitations of conventional black-box multimodal fusion approaches.
📝 Abstract
Modality fusion is a cornerstone of multimodal learning, enabling information integration from diverse data sources. However, vanilla fusion methods are limited by (1) an inability to account for heterogeneous interactions between modalities and (2) a lack of interpretability in uncovering the multimodal interactions inherent in the data. To this end, we propose I2MoE (Interpretable Multimodal Interaction-aware Mixture of Experts), an end-to-end MoE framework designed to enhance modality fusion by explicitly modeling diverse multimodal interactions, as well as providing interpretation at the local and global levels. First, I2MoE utilizes different interaction experts with weakly supervised interaction losses to learn multimodal interactions in a data-driven way. Second, I2MoE deploys a reweighting model that assigns importance scores to the output of each interaction expert, which offers sample-level and dataset-level interpretation. Extensive evaluation on medical and general multimodal datasets shows that I2MoE is flexible enough to be combined with different fusion techniques, consistently improves task performance, and provides interpretation across various real-world scenarios. Code is available at https://github.com/Raina-Xin/I2MoE.
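The mechanism the abstract describes — several interaction experts whose outputs are combined by a reweighting model whose scores double as a sample-level explanation — can be sketched in plain Python. Everything below (the toy expert definitions, the linear gate, the function names) is an illustrative assumption for exposition, not the paper's actual architecture or training procedure:

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical interaction experts: each fuses two modality feature
# vectors in a different way (modality-unique vs. synergistic signal).
# These stand in for the paper's learned interaction experts.
def expert_unique_a(a, b):
    return a

def expert_unique_b(a, b):
    return b

def expert_synergy(a, b):
    return [x * y for x, y in zip(a, b)]

EXPERTS = [expert_unique_a, expert_unique_b, expert_synergy]

def reweighting_gate(a, b, w):
    # Toy stand-in for the learned reweighting model: a linear map over
    # the concatenated modalities, softmaxed into one importance score
    # per expert. These scores are the sample-level interpretation.
    feats = a + b
    logits = [sum(wi * f for wi, f in zip(row, feats)) for row in w]
    return softmax(logits)

def i2moe_fuse(a, b, gate_w):
    outs = [e(a, b) for e in EXPERTS]
    scores = reweighting_gate(a, b, gate_w)
    # Fused representation = importance-weighted sum of expert outputs.
    fused = [sum(s * o[i] for s, o in zip(scores, outs))
             for i in range(len(a))]
    return fused, scores

a = [0.5, 1.0]     # modality-A features for one sample
b = [1.0, -0.5]    # modality-B features for the same sample
gate_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
fused, scores = i2moe_fuse(a, b, gate_w)
print([round(s, 3) for s in scores])  # per-expert importance, sums to 1
```

Averaging the per-sample scores over a whole dataset would give the dataset-level (global) view of which interaction types dominate; in the paper, the experts are additionally shaped by weakly supervised interaction losses rather than being fixed functions as here.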