🤖 AI Summary
Existing multimodal large language model (MLLM)-based sentiment explanations often exhibit inconsistency with predicted labels, undermining interpretability and system trustworthiness. To address this, we propose the Emotional Rationale Verifier (ERV) and an explanation reward mechanism, achieving, for the first time, dynamic alignment between predictions and explanations without modifying the model architecture or requiring additional annotations. Our method integrates the ERV into the inference pipeline and employs reinforcement learning–inspired reward optimization to guide explanation generation. Evaluated on the MAFW and DFEW benchmarks, our approach improves explanation–prediction consistency by 12.7% and sentiment classification accuracy by 3.4%. Human evaluations further confirm significant gains in interactive trustworthiness and explanation quality, demonstrating that faithful, label-aligned reasoning can be effectively learned through reward-driven refinement.
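To make the verifier-plus-reward idea concrete, here is a minimal, self-contained sketch of verifier-guided explanation selection at inference time. Everything in it is illustrative: the keyword lexicon stands in for the learned ERV, and names such as `erv_reward` and `select_explanation` are hypothetical, not the paper's API.

```python
# A minimal sketch of verifier-guided explanation selection. The toy
# keyword-based scorer below is a stand-in for the learned Emotional
# Rationale Verifier (ERV); the real verifier and reward usage are
# described in the paper, not reproduced here.

# Toy lexicon standing in for a learned verifier (assumption for illustration).
EMOTION_CUES = {
    "happiness": {"smile", "laugh", "bright", "cheerful"},
    "sadness": {"tears", "slumped", "downcast", "quiet"},
}

def erv_reward(explanation: str, predicted_label: str) -> float:
    """Score how consistent an explanation is with the predicted emotion.
    Here: fraction of label-specific cue words present, a placeholder for
    the learned verifier's consistency score."""
    cues = EMOTION_CUES.get(predicted_label, set())
    if not cues:
        return 0.0
    words = {w.strip(".,!?") for w in explanation.lower().split()}
    return len(words & cues) / len(cues)

def select_explanation(candidates: list[str], predicted_label: str) -> str:
    """Explanation-reward step: keep the candidate rationale the verifier
    rates most consistent with the predicted label."""
    return max(candidates, key=lambda c: erv_reward(c, predicted_label))

if __name__ == "__main__":
    candidates = [
        "Her eyes are downcast and she speaks in a quiet voice.",
        "She breaks into a bright smile and starts to laugh.",
    ]
    print(select_explanation(candidates, "happiness"))
    # -> the smiling/laughing rationale, the one consistent with "happiness"
```

Per the summary above, the verifier's score also acts as a reward guiding generation itself rather than only filtering outputs post hoc; a sketch of that use follows the abstract below.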
📝 Abstract
The recent advancement of Multimodal Large Language Models (MLLMs) is transforming human-computer interaction (HCI) from surface-level exchanges into more nuanced and emotionally intelligent communication. To realize this shift, emotion understanding becomes essential, allowing systems to capture the subtle cues underlying user intent. Furthermore, providing faithful explanations for predicted emotions is crucial for ensuring interpretability and building user trust. However, current MLLM-based methods often generate emotion explanations that diverge from the target labels and sometimes even contradict their own predicted emotions. This inconsistency poses a critical risk of misunderstanding and erodes reliability in interactive settings. To address this, we propose a novel approach: the Emotional Rationale Verifier (ERV) and an Explanation Reward. Our method guides the model to produce reasoning that is explicitly consistent with the target emotion during multimodal emotion recognition, without modifying the model architecture or requiring additional paired video-description annotations. Our method significantly improves explanation–prediction consistency and explanation emotion accuracy on the MAFW and DFEW datasets. Through extensive experiments and human evaluations, we show that our approach not only strengthens the alignment between explanations and predictions but also empowers MLLMs to deliver emotionally coherent, trustworthy interactions, marking a key step toward truly human-like HCI systems.
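As a hedged illustration of the "reinforcement learning–inspired reward optimization" mentioned in the summary, the sketch below shows one plausible way an Explanation Reward could enter training: a REINFORCE-style surrogate loss that scales an explanation's token log-likelihood by its baseline-subtracted verifier score. The objective shape, the baseline, and all names (`reinforce_loss`, the toy numbers) are assumptions for illustration, not the paper's specified formulation.

```python
# A hedged sketch of reward-driven explanation refinement via a
# REINFORCE-style objective. Hypothetical throughout: the paper's exact
# objective, reward shaping, and training setup are not specified here.

import math

def reinforce_loss(log_probs: list[float], reward: float, baseline: float) -> float:
    """Policy-gradient surrogate: weight the sampled explanation's total
    log-probability by the (baseline-subtracted) verifier reward, so
    rationales the ERV judges consistent with the target emotion are
    reinforced and inconsistent ones are discouraged."""
    advantage = reward - baseline
    return -advantage * sum(log_probs)

if __name__ == "__main__":
    # Token log-probabilities of a sampled explanation (toy numbers).
    log_probs = [math.log(0.8), math.log(0.6), math.log(0.9)]
    # ERV consistency score for this explanation vs. the target emotion.
    reward, baseline = 0.9, 0.5
    print(f"surrogate loss: {reinforce_loss(log_probs, reward, baseline):.4f}")
```

Because the reward is computed from the generated text alone, such a scheme needs no new annotations and leaves the model architecture untouched, consistent with the claims above.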