I2MoE: Interpretable Multimodal Interaction-aware Mixture-of-Experts

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenges of modeling heterogeneous cross-modal interactions and providing interpretable results in multimodal fusion, this paper proposes I2MoE, an end-to-end Interpretable Multimodal Interaction-aware Mixture-of-Experts framework. To tackle interaction modeling, I2MoE trains a set of interaction experts with weakly supervised interaction losses that encourage each expert to capture a distinct type of multimodal interaction in a data-driven way. For interpretability, a reweighting model assigns importance scores to each interaction expert's output, yielding both sample-level (local) and dataset-level (global) explanations. Evaluated on medical and general multimodal benchmarks, I2MoE consistently improves task performance, combines flexibly with different fusion techniques, and delivers stable, verifiable interaction attributions in real-world scenarios. The core contribution is the coupling of heterogeneous interaction modeling with multi-granularity interpretability, overcoming limitations of conventional black-box multimodal fusion approaches.

📝 Abstract
Modality fusion is a cornerstone of multimodal learning, enabling information integration from diverse data sources. However, vanilla fusion methods are limited by (1) inability to account for heterogeneous interactions between modalities and (2) lack of interpretability in uncovering the multimodal interactions inherent in the data. To this end, we propose I2MoE (Interpretable Multimodal Interaction-aware Mixture of Experts), an end-to-end MoE framework designed to enhance modality fusion by explicitly modeling diverse multimodal interactions, as well as providing interpretation on a local and global level. First, I2MoE utilizes different interaction experts with weakly supervised interaction losses to learn multimodal interactions in a data-driven way. Second, I2MoE deploys a reweighting model that assigns importance scores for the output of each interaction expert, which offers sample-level and dataset-level interpretation. Extensive evaluation of medical and general multimodal datasets shows that I2MoE is flexible enough to be combined with different fusion techniques, consistently improves task performance, and provides interpretation across various real-world scenarios. Code is available at https://github.com/Raina-Xin/I2MoE.
Problem

Research questions and friction points this paper is trying to address.

Enhance modality fusion by modeling diverse multimodal interactions
Provide interpretability for multimodal interactions at local and global levels
Improve task performance and flexibility across various multimodal datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Mixture-of-Experts for multimodal fusion
Models diverse interactions with weakly supervised losses
Provides local and global interpretability via reweighting
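The reweighting idea above can be sketched in a few lines: interaction experts each produce a fused representation, and a gating model softmax-scores them per sample; averaging the scores over the dataset gives the global view. This is a minimal NumPy illustration, not the paper's implementation — the expert functions, feature dimensions, and the linear scorer are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax along the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Two modality embeddings for a batch of 4 samples (hypothetical 8-dim features),
# e.g. imaging features and clinical features.
x_a = rng.normal(size=(4, 8))
x_b = rng.normal(size=(4, 8))

# Toy interaction "experts": stand-ins for uniqueness- and synergy-style experts.
def expert_unique_a(a, b): return a        # modality-A-unique information
def expert_unique_b(a, b): return b        # modality-B-unique information
def expert_synergy(a, b):  return a * b    # multiplicative cross-modal interaction

experts = [expert_unique_a, expert_unique_b, expert_synergy]

# Reweighting model: a linear scorer on the concatenated modalities
# (randomly initialized here; learned end-to-end in the actual framework).
W = rng.normal(size=(16, len(experts)))
scores = softmax(np.concatenate([x_a, x_b], axis=1) @ W)   # (4, 3) importance scores

# Fused representation: importance-weighted sum of the expert outputs.
expert_out = np.stack([f(x_a, x_b) for f in experts], axis=1)  # (4, 3, 8)
fused = (scores[..., None] * expert_out).sum(axis=1)           # (4, 8)

# Sample-level interpretation: one score vector per sample.
# Dataset-level interpretation: mean importance of each expert over all samples.
sample_importance = scores[0]
global_importance = scores.mean(axis=0)
```

The importance scores are a proper distribution over experts for every sample, so they can be read directly as "how much each interaction type contributed" locally, and their dataset mean as a global attribution.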