MissBench: Benchmarking Multimodal Affective Analysis under Imbalanced Missing Modalities

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of non-uniform modality missingness in real-world multimodal sentiment analysis, which often leads to training bias and unfair modality contributions—issues inadequately captured by existing evaluation protocols. To this end, we propose MissBench, a benchmark framework that establishes standardized evaluation protocols for both fully observed and non-uniformly missing modalities across four widely used sentiment datasets. We further introduce two novel metrics: the Modality Equity Index (MEI) and the Modality Learning Index (MLI), which, for the first time, quantitatively assess model fairness and optimization balance under missing-modality conditions. Experimental results reveal that models demonstrating robustness under uniformly shared missingness can still exhibit significant modality unfairness under non-uniform missingness, thereby underscoring the necessity and effectiveness of MissBench.

📝 Abstract
Multimodal affective computing underpins key tasks such as sentiment analysis and emotion recognition. Standard evaluations, however, often assume that textual, acoustic, and visual modalities are equally available. In real applications, some modalities are systematically more fragile or expensive, creating imbalanced missing rates and training biases that task-level metrics alone do not reveal. We introduce MissBench, a benchmark and framework for multimodal affective tasks that standardizes both shared and imbalanced missing-rate protocols on four widely used sentiment and emotion datasets. MissBench also defines two diagnostic metrics. The Modality Equity Index (MEI) measures how fairly different modalities contribute across missing-modality configurations. The Modality Learning Index (MLI) quantifies optimization imbalance by comparing modality-specific gradient norms during training, aggregated across modality-related modules. Experiments on representative method families show that models that appear robust under shared missing rates can still exhibit marked modality inequity and optimization imbalance under imbalanced conditions. These findings position MissBench, together with MEI and MLI, as practical tools for stress-testing and analyzing multimodal affective models in realistic incomplete-modality settings. For reproducibility, we release our code at: https://anonymous.4open.science/r/MissBench-4098/
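The abstract describes MLI as comparing modality-specific gradient norms during training. The paper's exact formula is not reproduced here, so the sketch below illustrates just one plausible reading: compute a per-modality gradient norm on a toy linear encoder for each modality, then report the max/min ratio as an imbalance score (1.0 would mean perfectly balanced optimization). The function names and the ratio-based aggregation are illustrative assumptions, not MissBench's definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_norm_linear(x, W):
    """Gradient L2 norm of loss = mean((x @ W)**2) w.r.t. W.

    Stands in for the gradient norm of one modality's encoder;
    for y = x @ W, dL/dW = 2/size(y) * x.T @ y.
    """
    y = x @ W
    g = 2.0 / y.size * (x.T @ y)
    return float(np.linalg.norm(g))

# Toy setup: one tiny "encoder" per modality (illustrative, not the paper's models).
modalities = ("text", "audio", "vision")
norms = {}
for m in modalities:
    x = rng.normal(size=(8, 4))   # a batch of 4-dim features
    W = rng.normal(size=(4, 2))   # this modality's encoder weights
    norms[m] = grad_norm_linear(x, W)

# An MLI-style imbalance score: ratio of largest to smallest gradient norm.
# Values >= 1; close to 1 means balanced optimization across modalities.
mli = max(norms.values()) / max(min(norms.values()), 1e-12)
print(norms, mli)
```

In a real training loop one would accumulate these norms over the modality-related modules at each step (as the abstract indicates), rather than from a single random batch as in this sketch.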
Problem

Research questions and friction points this paper is trying to address.

multimodal affective computing
imbalanced missing modalities
modality equity
optimization imbalance
sentiment analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

MissBench
imbalanced missing modalities
Modality Equity Index
Modality Learning Index
multimodal affective computing