🤖 AI Summary
This work addresses the absence of effective benchmarks for evaluating empathy in multimodal large language models, particularly with regard to audio-text integration and speech-only empathy assessment. To this end, the authors propose AEQ-Bench, the first dual-setting empathy evaluation benchmark that incorporates both contextual specificity and prosodic variation, enabling fine-grained assessment of models at both linguistic and paralinguistic levels. The benchmark comprises two tasks: generating empathetic responses grounded in combined audio and textual cues, and directly evaluating the empathy quality of speech responses without relying on textual transcripts. Experimental results demonstrate that multimodal models capable of audio output generally outperform text-only counterparts in empathetic response generation. While these models show human-level agreement in coarse-grained empathy judgments, they still fall short in fine-grained evaluation of paralinguistic expressive cues.
📝 Abstract
While the automatic evaluation of omni-modal large models (OLMs) is essential, assessing empathy remains a significant challenge due to its inherently affective nature. To investigate this challenge, we introduce AEQ-Bench (Audio Empathy Quotient Benchmark), a novel benchmark to systematically assess two core empathetic capabilities of OLMs: (i) generating empathetic responses by comprehending affective cues from multi-modal inputs (audio + text), and (ii) judging the empathy of audio responses without relying on text transcription. Compared to existing benchmarks, AEQ-Bench incorporates two novel settings that vary in context specificity and speech tone. Comprehensive assessment across linguistic and paralinguistic metrics reveals that (1) OLMs trained with audio output capabilities generally outperform models with text-only outputs, and (2) while OLMs align with human judgments for coarse-grained quality assessment, they remain unreliable for evaluating fine-grained paralinguistic expressiveness.