🤖 AI Summary
Multimodal affective computing (MAC) suffers from unstable performance across tasks and a limited understanding of how model architectures and data characteristics jointly shape results. To address this, the authors conduct a systematic benchmark of state-of-the-art open-source multimodal large language models (MLLMs) that concurrently process audio, visual, and textual modalities on established emotion-recognition datasets, and propose a hybrid strategy that combines generative knowledge prompting with supervised fine-tuning to strengthen MLLMs' affective computing capabilities. Experiments across multiple standard benchmarks demonstrate significant gains in end-to-end emotion analysis accuracy and robustness. By analyzing how architectural design choices and dataset properties influence performance, this work offers actionable guidance for MAC model development. The implementation is publicly available.
📝 Abstract
Multimodal Affective Computing (MAC) aims to recognize and interpret human emotions by integrating information from diverse modalities such as text, video, and audio. Recent advancements in Multimodal Large Language Models (MLLMs) have significantly reshaped the landscape of MAC by offering a unified framework for processing and aligning cross-modal information. However, practical challenges remain, including performance variability across complex MAC tasks and insufficient understanding of how architectural designs and data characteristics impact affective analysis. To address these gaps, we conduct a systematic benchmark evaluation of state-of-the-art open-source MLLMs capable of concurrently processing audio, visual, and textual modalities across multiple established MAC datasets. Our evaluation not only compares the performance of these MLLMs but also provides actionable insights into model optimization by analyzing the influence of model architectures and dataset properties. Furthermore, we propose a novel hybrid strategy that combines generative knowledge prompting with supervised fine-tuning to enhance MLLMs' affective computing capabilities. Experimental results demonstrate that this integrated approach significantly improves performance across various MAC tasks, offering a promising avenue for future research and development in this field. Our code is released at https://github.com/LuoMSen/MLLM-MAC.
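To make the "generative knowledge prompting" idea concrete, here is a minimal two-stage sketch. This is an illustrative assumption, not the authors' released implementation: the prompt templates, field names, and label set below are hypothetical. The first stage asks an MLLM to generate auxiliary affective cues from the input; the second stage prepends that generated knowledge to the emotion-classification prompt before (optionally) fine-tuning on such prompts.

```python
# Hypothetical sketch of two-stage generative knowledge prompting for
# emotion recognition. Templates and the label set are assumptions for
# illustration only; they are not taken from the MLLM-MAC repository.

EMOTIONS = ["happy", "sad", "angry", "neutral", "surprise", "fear", "disgust"]

def knowledge_prompt(transcript: str) -> str:
    """Stage 1: elicit affective cues (vocal tone, facial expression,
    situational context) from the model, given the utterance text."""
    return (
        "Describe the likely vocal tone, facial expression, and context "
        f'of the speaker in this utterance:\n"{transcript}"'
    )

def classification_prompt(transcript: str, knowledge: str) -> str:
    """Stage 2: classify the emotion, with the stage-1 output prepended
    as auxiliary knowledge. Pairs of (prompt, gold label) built this way
    could then serve as supervised fine-tuning data."""
    return (
        f"Auxiliary cues: {knowledge}\n"
        f'Utterance: "{transcript}"\n'
        f"Answer with one label from {EMOTIONS}."
    )

utterance = "I can't believe you did that!"
stage1 = knowledge_prompt(utterance)
stage2 = classification_prompt(utterance, "raised voice, wide eyes")
```

In a real pipeline, `stage1` would be sent to the MLLM (together with the audio and video inputs) and its generated description would replace the hand-written cue string passed to `classification_prompt`.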