SpeakerSleuth: Evaluating Large Audio-Language Models as Judges for Multi-turn Speaker Consistency

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that existing large audio-language models (LALMs) struggle to reliably assess speaker acoustic consistency in multi-turn dialogues. To this end, the authors introduce SpeakerSleuth, a benchmark that systematically evaluates nine prominent LALMs across four datasets on three practical tasks: consistency judgment, error localization, and optimal speaker matching. The evaluation rests on 1,818 human-verified multi-turn dialogue samples with controlled acoustic difficulty. The study reveals that LALMs generally over-rely on textual content while neglecting acoustic cues, leading to significant performance degradation in multi-speaker scenarios. Only in the acoustic matching task do these models show moderate competence, underscoring a critical modality imbalance in their current architectures.

📝 Abstract
Large Audio-Language Models (LALMs) as judges have emerged as a prominent approach for evaluating speech generation quality, yet their ability to assess speaker consistency across multi-turn conversations remains unexplored. We present SpeakerSleuth, a benchmark evaluating whether LALMs can reliably judge speaker consistency in multi-turn dialogues through three tasks reflecting real-world requirements. We construct 1,818 human-verified evaluation instances across four diverse datasets spanning synthetic and real speech, with controlled acoustic difficulty. Evaluating nine widely-used LALMs, we find that models struggle to reliably detect acoustic inconsistencies. For instance, given audio samples of the same speaker's turns, some models overpredict inconsistency, whereas others are overly lenient. Models further struggle to identify the exact turns that are problematic. When other interlocutors' turns are provided together, performance degrades dramatically as models prioritize textual coherence over acoustic cues, failing to detect even obvious gender switches for a speaker. On the other hand, models perform substantially better at choosing the audio that best matches the speaker among several acoustic variants, demonstrating inherent acoustic discrimination capabilities. These findings expose a significant bias in LALMs: they tend to prioritize text over acoustics, revealing fundamental modality imbalances that need to be addressed to build reliable audio-language judges.
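The consistency-judgment task described above reduces to comparing a judge model's binary verdicts against human labels, with separate attention to the two failure modes the abstract names (overpredicting inconsistency vs. being overly lenient). A minimal, hypothetical sketch of such scoring is below; this is not the authors' code, and the function name and label convention are assumptions for illustration only.

```python
# Hypothetical scoring sketch (not the paper's implementation).
# Convention assumed here: 1 = speaker consistent across turns, 0 = inconsistent.

def score_judge(predictions, labels):
    """Return overall accuracy and a false-alarm rate for binary verdicts.

    The false-alarm rate is the fraction of truly consistent dialogues the
    judge flags as inconsistent; a high value corresponds to the
    "overpredict inconsistency" failure mode described in the abstract.
    """
    assert len(predictions) == len(labels) and predictions
    correct = sum(p == y for p, y in zip(predictions, labels))
    # Restrict to dialogues whose human label says "consistent".
    consistent = [p for p, y in zip(predictions, labels) if y == 1]
    false_alarm = sum(p == 0 for p in consistent) / len(consistent)
    return correct / len(predictions), false_alarm

# Toy example: four dialogues, one consistent dialogue wrongly flagged.
acc, far = score_judge([1, 0, 0, 1], [1, 1, 0, 1])
```

An overly lenient judge would instead show low false-alarm rate but poor accuracy on the inconsistent subset, so a symmetric miss rate over the `y == 0` dialogues would complete the picture.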
Problem

Research questions and friction points this paper is trying to address:

- speaker consistency
- large audio-language models
- multi-turn dialogue
- acoustic inconsistency
- modality bias
Innovation

Methods, ideas, or system contributions that make the work stand out:

- Large Audio-Language Models
- speaker consistency
- multi-turn dialogue
- acoustic-textual bias
- evaluation benchmark
👥 Authors

Jonggeun Lee, Graduate School of Data Science, Seoul National University
Junseong Pyo, Department of Information Systems, Hanyang University
Gyuhyeon Seo, Graduate School of Data Science, Seoul National University
Yohan Jo, Seoul National University
Natural Language Processing · Agents · Computational Psychology · Reasoning