Evaluation Before Generation: A Paradigm for Robust Multimodal Sentiment Analysis with Missing Modalities

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation and reduced generalization in multimodal sentiment analysis caused by missing modalities by proposing a prompt-based adaptive framework for handling incomplete modalities. The method dynamically evaluates the importance of missing modalities prior to generation, thereby avoiding low-quality imputation. It enhances both local relevance and global consistency through modality-invariant prompt disentanglement, dynamic weighting, and multi-level fusion mechanisms. Leveraging pretrained models, pseudo-labeling, mutual information–based weighting, and attention architectures, the proposed approach achieves state-of-the-art performance on the CMU-MOSI, CMU-MOSEI, and CH-SIMS benchmarks and demonstrates robustness across diverse modality-missing scenarios.
📝 Abstract
The missing modality problem poses a fundamental challenge in multimodal sentiment analysis, significantly degrading model accuracy and generalization in real-world scenarios. Existing approaches primarily improve robustness through prompt learning and pre-trained models. However, two limitations remain. First, the necessity of generating missing modalities lacks rigorous evaluation. Second, the structural dependencies among multimodal prompts and their global coherence are insufficiently explored. To address these issues, a Prompt-based Missing Modality Adaptation framework is proposed. A Missing Modality Evaluator is introduced at the input stage to dynamically assess the importance of missing modalities using pre-trained models and pseudo-labels, thereby avoiding low-quality data imputation. Building on this, a Modality-invariant Prompt Disentanglement module decomposes shared prompts into modality-specific private prompts to capture intrinsic local correlations and improve representation quality. In addition, a Dynamic Prompt Weighting module computes mutual information-based weights from cross-attention outputs to adaptively suppress interference from missing modalities. To enhance global consistency, a Multi-level Prompt Dynamic Connection module integrates shared prompts with self-attention outputs through residual connections, leveraging global prompt priors to strengthen key guidance features. Extensive experiments on three public benchmarks, CMU-MOSI, CMU-MOSEI, and CH-SIMS, demonstrate that the proposed framework achieves state-of-the-art performance and stable results under diverse missing-modality settings. The implementation is available at https://github.com/rongfei-chen/ProMMA
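The Dynamic Prompt Weighting idea described above (deriving per-modality weights from cross-attention outputs so a missing modality is adaptively suppressed) can be sketched as follows. This is an illustrative toy in NumPy, not the paper's implementation: the single-head attention, the correlation-based mutual-information proxy, and all function names here are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key_value):
    # single-head scaled dot-product attention: prompt tokens attend to modality features
    d = query.shape[-1]
    scores = query @ key_value.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ key_value

def mi_proxy_weight(fused, prompt):
    # crude stand-in for a mutual-information score: absolute Pearson
    # correlation between fused features and the shared prompt
    if not np.any(fused):          # degenerate (all-zero) output carries no information
        return 0.0
    c = np.corrcoef(fused.ravel(), prompt.ravel())[0, 1]
    return abs(c) if np.isfinite(c) else 0.0

rng = np.random.default_rng(0)
prompt = rng.normal(size=(4, 8))   # shared prompt tokens
text   = rng.normal(size=(4, 8))   # present modality
audio  = np.zeros((4, 8))          # simulated missing modality (zero-imputed)

fused = {name: cross_attention(prompt, feats)
         for name, feats in [("text", text), ("audio", audio)]}
raw = np.array([mi_proxy_weight(f, prompt) for f in fused.values()])
weights = softmax(raw)             # adaptive per-modality weights, sum to 1

# the missing modality receives the lower weight, so its (uninformative)
# features contribute less to the weighted fusion
```

Because the zero-imputed audio stream yields an attention output uncorrelated with the shared prompt, its weight falls below the text modality's, mirroring the interference-suppression behavior the module is designed for.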
Problem

Research questions and friction points this paper is trying to address.

missing modalities
multimodal sentiment analysis
prompt learning
modality generation
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Missing Modality Evaluation
Prompt Disentanglement
Dynamic Prompt Weighting
Multimodal Sentiment Analysis
Modality Invariance
Rongfei Chen
School of Computer Science and Technology, Eastern Institute of Technology, Ningbo, China
Tingting Zhang
School of Mechatronic Engineering and Automation, Shanghai University, China
Xiaoyu Shen
Eastern Institute of Technology, Ningbo
language model, multi-modal learning, reasoning
Wei Zhang
College of Information Science and Technology, Eastern Institute of Technology, Ningbo, China
reinforcement learning, motion planning, humanoid robot, intelligent fault diagnosis