🤖 AI Summary
This work addresses the challenge that existing Large Audio-Language Models (LALMs) struggle to reliably assess speaker acoustic consistency in multi-turn dialogues. To this end, the authors introduce SpeakerSleuth, a benchmark that systematically evaluates nine prominent LALMs across four datasets on three practical tasks: consistency judgment, error localization, and optimal speaker matching. The evaluation is based on 1,818 human-verified multi-turn dialogue samples with controlled acoustic difficulty. The study reveals that LALMs generally over-rely on textual content while neglecting acoustic cues, leading to significant performance degradation in multi-speaker scenarios. Only on the speaker-matching task do the models show moderate competence, underscoring a critical modality imbalance in their current architectures.
📝 Abstract
Large Audio-Language Models (LALMs) as judges have emerged as a prominent approach for evaluating speech generation quality, yet their ability to assess speaker consistency across multi-turn conversations remains unexplored. We present SpeakerSleuth, a benchmark evaluating whether LALMs can reliably judge speaker consistency in multi-turn dialogues through three tasks reflecting real-world requirements. We construct 1,818 human-verified evaluation instances across four diverse datasets spanning synthetic and real speech, with controlled acoustic difficulty. Evaluating nine widely used LALMs, we find that models struggle to reliably detect acoustic inconsistencies. For instance, given audio samples of the same speaker's turns, some models overpredict inconsistency, whereas others are overly lenient. Models further struggle to identify the exact turns that are problematic. When other interlocutors' turns are provided together, performance degrades dramatically as models prioritize textual coherence over acoustic cues, failing to detect even obvious gender switches for a speaker. On the other hand, models perform substantially better at choosing the audio that best matches the speaker among several acoustic variants, demonstrating inherent acoustic discrimination capabilities. These findings expose a significant bias in LALMs: they tend to prioritize text over acoustics, revealing fundamental modality imbalances that need to be addressed to build reliable audio-language judges.
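To make the three evaluation tasks concrete, below is a minimal sketch of how an LALM-as-judge harness for them might look. This is an illustration under stated assumptions, not the benchmark's actual interface: the `Turn` structure, the prompt wording, and the `query_lalm` function are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str      # speaker label, e.g. "A" or "B"
    text: str         # transcript of the turn
    audio_path: str   # path to this turn's audio clip

def query_lalm(prompt: str, audio_paths: list[str]) -> str:
    """Hypothetical stub: a real harness would send the prompt plus
    audio clips to the model under test and return its text response."""
    raise NotImplementedError

def judge_consistency(turns: list[Turn]) -> str:
    # Task 1 (consistency judgment): do all of one speaker's turns
    # sound like the same voice?
    prompt = ("Listen to these turns attributed to one speaker. "
              "Answer 'consistent' or 'inconsistent'.")
    return query_lalm(prompt, [t.audio_path for t in turns])

def localize_error(turns: list[Turn]) -> str:
    # Task 2 (error localization): identify which turn(s) break
    # acoustic consistency.
    prompt = ("One or more of these turns may be spoken by a different "
              "voice. Reply with the index of each inconsistent turn.")
    return query_lalm(prompt, [t.audio_path for t in turns])

def match_speaker(reference: Turn, candidates: list[Turn]) -> str:
    # Task 3 (optimal speaker matching): pick the candidate clip that
    # best matches the reference speaker's voice.
    prompt = (f"Given the reference voice, which of the {len(candidates)} "
              "candidate clips is spoken by the same person? Reply with its index.")
    return query_lalm(prompt,
                      [reference.audio_path] + [c.audio_path for c in candidates])
```

Under this framing, the paper's central finding is that models fail on the first two tasks (where textual coherence can distract them) while doing comparatively well on the third, which isolates acoustic discrimination.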