π€ AI Summary
This work addresses video-based visual emotion understanding by proposing Omni-SILAβthe first unified task jointly modeling explicit cues (e.g., facial expressions) and implicit scene cues (e.g., actions, object relations, background) for integrated emotion recognition, spatiotemporal localization, and attribution explanation. Methodologically, we introduce the Implicit-enhanced Causal Mixture-of-Experts (ICM) architecture, comprising a Scene-Balanced MoE (SBM) and an Implicit-Enhanced Causal (IEC) module, which alleviates reliance on explicit cues via implicit-aware representation learning, multimodal MoE routing, causal inference, and video-temporal modeling. We construct the dual-track Omni-SILA dataset with fine-grained explicit/implicit annotations. Experiments demonstrate that our approach outperforms state-of-the-art Video-LLMs by 12.7% in emotion attribution accuracy and 9.3% in localization mAP.
π Abstract
Prior studies on Visual Sentiment Understanding (VSU) primarily rely on the explicit scene information (e.g., facial expression) to judge visual sentiments, which largely ignore implicit scene information (e.g., human action, objection relation and visual background), while such information is critical for precisely discovering visual sentiments. Motivated by this, this paper proposes a new Omni-scene driven visual Sentiment Identifying, Locating and Attributing in videos (Omni-SILA) task, aiming to interactively and precisely identify, locate and attribute visual sentiments through both explicit and implicit scene information. Furthermore, this paper believes that this Omni-SILA task faces two key challenges: modeling scene and highlighting implicit scene beyond explicit. To this end, this paper proposes an Implicit-enhanced Causal MoE (ICM) approach for addressing the Omni-SILA task. Specifically, a Scene-Balanced MoE (SBM) and an Implicit-Enhanced Causal (IEC) blocks are tailored to model scene information and highlight the implicit scene information beyond explicit, respectively. Extensive experimental results on our constructed explicit and implicit Omni-SILA datasets demonstrate the great advantage of the proposed ICM approach over advanced Video-LLMs.