Multimodal Emotion Recognition using Audio-Video Transformer Fusion with Cross Attention

๐Ÿ“… 2024-07-26
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Multimodal sentiment recognition faces critical challenges including audio-video asynchrony, insufficient feature extraction, and inefficient cross-modal fusion. To address these, we propose AVT-CA, the first dual-stream Transformer architecture explicitly designed for audio and video modalities, integrated with cross-modal cross-attention to enable temporal adaptive alignment and dynamic focusing on emotion-discriminative regions. Furthermore, we introduce multi-level feature alignment and learnable weighted fusion to enhance semantic consistency across modalities. Evaluated on three benchmark datasetsโ€”CMU-MOSEI, RAVDESS, and CREMA-Dโ€”the model achieves an average accuracy improvement of 3.2% over state-of-the-art methods. Notably, it demonstrates superior robustness to modality corruption and significantly enhanced cross-dataset generalization capability.

Technology Category

Application Category

๐Ÿ“ Abstract
Understanding emotions is a fundamental aspect of human communication. Integrating audio and video signals offers a more comprehensive understanding of emotional states compared to traditional methods that rely on a single data source, such as speech or facial expressions. Despite its potential, multimodal emotion recognition faces significant challenges, particularly in synchronization, feature extraction, and fusion of diverse data sources. To address these issues, this paper introduces a novel transformer-based model named Audio-Video Transformer Fusion with Cross Attention (AVT-CA). The AVT-CA model employs a transformer fusion approach to effectively capture and synchronize interlinked features from both audio and video inputs, thereby resolving synchronization problems. Additionally, the Cross Attention mechanism within AVT-CA selectively extracts and emphasizes critical features while discarding irrelevant ones from both modalities, addressing feature extraction and fusion challenges. Extensive experimental analysis conducted on the CMU-MOSEI, RAVDESS and CREMA-D datasets demonstrates the efficacy of the proposed model. The results underscore the importance of AVT-CA in developing precise and reliable multimodal emotion recognition systems for practical applications.
Problem

Research questions and friction points this paper is trying to address.

Overcomes synchronization in multimodal emotion recognition.
Enhances feature extraction from audio and video inputs.
Improves data fusion using transformer-based cross attention.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based fusion model
Cross Attention mechanism
Audio-video feature synchronization
๐Ÿ”Ž Similar Papers
No similar papers found.