🤖 AI Summary
Existing multimodal conversational emotion recognition methods typically employ fixed-parameter fusion of multimodal features, which struggles to accommodate the dynamic requirements of different emotion categories and thus limits recognition performance. To address this limitation, this work proposes a Dynamic Fusion-aware Graph Convolutional Network (DF-GCN), which introduces ordinary differential equations into the graph convolutional framework for the first time and designs a global information-guided dynamic prompting mechanism that adaptively adjusts fusion parameters according to the target emotion category. Extensive experiments on two public multimodal conversational datasets demonstrate that the proposed method significantly outperforms state-of-the-art models, validating the effectiveness of the dynamic fusion mechanism in enhancing both accuracy and generalization capability in emotion recognition.
📝 Abstract
Multimodal emotion recognition in conversations (MERC) aims to identify and understand the emotions that speakers express during utterance interactions, drawing on multiple modalities such as text, audio, and visual signals. Existing studies have shown that graph convolutional networks (GCNs) can improve MERC performance by modeling dependencies between speakers. However, existing methods usually process multimodal features with fixed parameters regardless of emotion type, ignoring the dynamic nature of fusion across modalities; this forces the model to trade off performance among emotion categories and limits its accuracy on specific emotions. To this end, we propose a Dynamic Fusion-aware Graph Convolutional Network (DF-GCN) for robust recognition of multimodal emotion features in conversations. Specifically, DF-GCN integrates ordinary differential equations into GCNs to capture the dynamic nature of emotional dependencies within utterance interaction networks, and leverages prompts generated from the utterance's global information vector (GIV) to guide the dynamic fusion of multimodal features. This allows the model to adapt its parameters while processing each utterance, so that different emotion categories can be handled with different network parameters at inference time, yielding more flexible emotion classification and stronger generalization. Comprehensive experiments on two public multimodal conversational datasets confirm that DF-GCN delivers superior performance, benefiting significantly from the introduced dynamic fusion mechanism.
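To make the two ideas in the abstract concrete, the sketch below shows (a) graph convolution viewed as an ODE integrated with explicit Euler steps, and (b) fusion weights produced per utterance from a global information vector instead of being fixed. This is a minimal NumPy illustration under assumed dynamics (`dH/dt = A_norm @ H @ W - H`) and assumed function names (`ode_gcn_propagate`, `prompt_guided_fusion`, `W_prompt`); the paper's actual formulation may differ.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, standard in GCNs.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def ode_gcn_propagate(H0, A_norm, W, steps=4, dt=0.25):
    # Continuous-depth view of graph convolution: integrate the assumed
    # dynamics dH/dt = A_norm @ H @ W - H with explicit Euler steps,
    # rather than stacking a fixed number of discrete GCN layers.
    H = H0
    for _ in range(steps):
        H = H + dt * (A_norm @ H @ W - H)
    return H

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def prompt_guided_fusion(modal_feats, giv, W_prompt):
    # The global information vector (GIV) yields per-utterance gates over
    # the modalities, so fusion weights vary with the input instead of
    # being a single fixed parameter set shared by all emotion categories.
    # modal_feats: (num_modalities, num_utterances, dim)
    # giv:         (num_utterances, dim)
    gates = softmax(giv @ W_prompt)                 # (num_utterances, num_modalities)
    fused = np.einsum("um,mud->ud", gates, modal_feats)
    return fused, gates

# Toy run: 5 utterances, 8-dim features, 3 modalities (text/audio/visual).
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) > 0.5).astype(float)
A = np.maximum(A, A.T)                              # undirected interaction graph
H = ode_gcn_propagate(rng.standard_normal((5, 8)),
                      normalize_adj(A),
                      0.1 * rng.standard_normal((8, 8)))
fused, gates = prompt_guided_fusion(rng.standard_normal((3, 5, 8)),
                                    giv=H,
                                    W_prompt=rng.standard_normal((8, 3)))
```

Each row of `gates` sums to 1 and differs across utterances, which is the "dynamic fusion" property the abstract contrasts with fixed-parameter fusion.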