VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning

📅 2024-04-10
🏛️ arXiv.org
📈 Citations: 16
Influential: 0
🤖 AI Summary
To address under-utilized scene context and heavy noise interference in contextual emotion recognition, this paper proposes a lightweight two-stage framework. First, a Vision-and-Large-Language Model (VLLM) performs commonsense reasoning to automatically generate emotion-relevant contextual descriptions from input images. Second, these textual descriptions are jointly modeled with image features by a cross-modal fusion Transformer for emotion classification. The work is the first to leverage VLLMs' commonsense reasoning capability for contextual modeling, eliminating the need for complex scene-encoding architectures or end-to-end multimodal training and thereby improving interpretability and computational efficiency. The method achieves state-of-the-art or competitive performance on three major benchmarks (EMOTIC, CAER-S, and BoLD), validating the complementary nature of visual and textual cues. Notably, it requires no additional annotations or task-specific pretraining, keeping training simple and efficient.
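The first stage of the summarized pipeline can be sketched as a prompting step. The prompt wording below is an illustrative guess, not the paper's exact prompt, and `query_vllm` is a hypothetical placeholder for whatever vision-language model API is actually used:

```python
# Stage 1 sketch: ask a VLLM for an emotion-relevant description of the
# subject in its visual context. Prompt wording is assumed, not the
# authors' exact prompt.
def build_prompt(subject="the person in the bounding box"):
    return (
        f"Describe the apparent emotion of {subject}, "
        "taking into account the surrounding scene and what is "
        "happening in it. Answer in one or two sentences."
    )

def query_vllm(image_path, prompt):
    # Hypothetical placeholder: replace with a real VLLM call
    # (e.g. a LLaVA-style chat model with image input).
    raise NotImplementedError

prompt = build_prompt()
print(prompt)
```

The resulting free-text description then serves as the textual input to the second-stage fusion model.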

📝 Abstract
Recognising emotions in context involves identifying the apparent emotions of an individual, taking into account contextual cues from the surrounding scene. Previous approaches to this task have involved the design of explicit scene-encoding architectures or the incorporation of external scene-related information, such as captions. However, these methods often utilise limited contextual information or rely on intricate training pipelines. In this work, we leverage the groundbreaking capabilities of Vision-and-Large-Language Models (VLLMs) to enhance in-context emotion classification without introducing complexity to the training process in a two-stage approach. In the first stage, we propose prompting VLLMs to generate descriptions in natural language of the subject's apparent emotion relative to the visual context. In the second stage, the descriptions are used as contextual information and, along with the image input, are used to train a transformer-based architecture that fuses text and visual features before the final classification task. Our experimental results show that the text and image features have complementary information, and our fused architecture significantly outperforms the individual modalities without any complex training methods. We evaluate our approach on three different datasets, namely, EMOTIC, CAER-S, and BoLD, and achieve state-of-the-art or comparable accuracy across all datasets and metrics compared to much more complex approaches. The code will be made publicly available on github: https://github.com/NickyFot/EmoCommonSense.git
Problem

Research questions and friction points this paper is trying to address.

Enhancing emotion recognition using contextual cues and common sense reasoning
Simplifying complex training pipelines for emotion classification
Improving performance by fusing visual and textual features effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLLMs generate emotion-context descriptions
Transformer fuses text and visual features
Simplified training improves classification performance
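The fusion step described above can be sketched as a small cross-modal Transformer: project text and image features to a shared width, concatenate them as one token sequence, self-attend, and pool for classification. All dimensions and layer counts below are illustrative assumptions, not the authors' configuration (the 26-way head only mirrors EMOTIC's category count):

```python
# Hedged sketch of stage-2 cross-modal fusion (assumed dimensions).
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, d_model=256, num_classes=26):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)    # project caption features
        self.image_proj = nn.Linear(image_dim, d_model)  # project visual features
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)      # e.g. 26 EMOTIC categories

    def forward(self, text_feats, image_feats):
        # text_feats: (B, T_text, text_dim); image_feats: (B, T_img, image_dim)
        tokens = torch.cat([self.text_proj(text_feats),
                            self.image_proj(image_feats)], dim=1)
        fused = self.encoder(tokens)          # joint self-attention over both modalities
        return self.head(fused.mean(dim=1))   # mean-pool tokens, then classify

model = FusionClassifier()
logits = model(torch.randn(2, 8, 768), torch.randn(2, 4, 512))
print(logits.shape)  # torch.Size([2, 26])
```

Because fusion happens only over precomputed features, this stage trains without any end-to-end multimodal pipeline, which is the simplification the bullets above highlight.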