🤖 AI Summary
Existing speech-to-speech (S2S) large language models lack a systematic evaluation framework tailored to multi-turn dialogue. Method: We introduce MTalk-Bench, the first comprehensive benchmark dedicated to multi-turn S2S models, covering semantic, paralinguistic, and ambient-sound dimensions, each spanning nine realistic scenarios plus targeted tasks. It integrates Arena-style pairwise comparison and rubric-based scoring, combining human annotation with LLM-as-a-judge for both relative and absolute assessment. Contribution/Results: Experiments reveal strong semantic understanding but critical bottlenecks in paralinguistic perception, ambient sound recognition, response conciseness, and cross-modal adaptation. Crucially, modality-aware architectural design proves more impactful than parameter scaling alone. The two evaluation protocols yield consistent yet complementary results, empirically validating the efficacy, and delineating the limits, of LLM-based judgment for S2S evaluation.
📝 Abstract
The rapid advancement of speech-to-speech (S2S) large language models (LLMs) has significantly improved real-time spoken interaction. However, current evaluation frameworks remain inadequate for assessing performance in complex, multi-turn dialogues. To address this, we introduce MTalk-Bench, a multi-turn S2S benchmark covering three core dimensions: Semantic Information, Paralinguistic Information, and Ambient Sound. Each dimension includes nine realistic scenarios, along with targeted tasks to assess specific capabilities such as reasoning. Our dual-method evaluation framework combines Arena-style evaluation (pairwise comparison) for relative assessment and Rubrics-based evaluation (absolute scoring) for absolute assessment. The benchmark includes both model and human outputs, evaluated by human evaluators and LLMs. Experimental results reveal two sets of findings. Overall performance of S2S LLMs: (1) models excel at semantic information processing yet underperform on paralinguistic information and ambient sound perception; (2) models typically regain coherence by increasing response length, sacrificing efficiency in multi-turn dialogues; (3) modality-aware, task-specific designs outperform brute-force scaling. Evaluation framework and reliability: (1) Arena and Rubrics yield consistent, complementary rankings, but reliable distinctions emerge only when performance gaps are large; (2) LLM-as-a-judge aligns with humans when gaps are clear or criteria are explicit, but exhibits position and length biases and is reliable for non-verbal evaluation only when textual annotations are provided. These results highlight current limitations in S2S evaluation and the need for more robust, speech-aware assessment frameworks.
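To make the dual protocol concrete, the sketch below shows how Arena-style pairwise verdicts and rubric scores might be aggregated into relative win rates and absolute per-dimension averages. It is a minimal illustration only; the record layout and function names (`arena_win_rates`, `rubric_averages`, fields such as `model_a`, `winner`, `dimension`) are assumptions for exposition and are not the authors' released code.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical record formats; field names are illustrative, not from the paper.
# Arena-style: each judgment compares two models' responses to the same dialogue.
arena_judgments = [
    {"model_a": "s2s-x", "model_b": "s2s-y", "winner": "s2s-x"},
    {"model_a": "s2s-y", "model_b": "s2s-x", "winner": "tie"},
]

# Rubric-based: each response is scored absolutely against per-dimension criteria.
rubric_scores = [
    {"model": "s2s-x", "dimension": "paralinguistic", "score": 3},
    {"model": "s2s-x", "dimension": "semantic", "score": 5},
]

def arena_win_rates(judgments):
    """Relative assessment: wins / comparisons per model (ties count as non-wins)."""
    wins, games = defaultdict(int), defaultdict(int)
    for j in judgments:
        for m in (j["model_a"], j["model_b"]):
            games[m] += 1
        if j["winner"] != "tie":
            wins[j["winner"]] += 1
    return {m: wins[m] / games[m] for m in games}

def rubric_averages(scores):
    """Absolute assessment: mean rubric score per (model, dimension) pair."""
    buckets = defaultdict(list)
    for s in scores:
        buckets[(s["model"], s["dimension"])].append(s["score"])
    return {k: mean(v) for k, v in buckets.items()}

print(arena_win_rates(arena_judgments))   # e.g. {'s2s-x': 0.5, 's2s-y': 0.0}
print(rubric_averages(rubric_scores))     # e.g. {('s2s-x', 'paralinguistic'): 3, ...}
```

In this reading, the Arena protocol only orders models against each other, while the rubric protocol attaches an absolute score to each model independently, which is why the paper can report the two as consistent yet complementary.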