URO-Bench: A Comprehensive Benchmark for End-to-End Spoken Dialogue Models

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: End-to-end spoken dialogue models (SDMs) lack systematic evaluation tailored to speech-to-speech (S2S) scenarios, particularly regarding multilingualism, multi-turn interaction, and paralinguistic cues (e.g., prosody, emotion), leaving weaknesses such as degraded instruction following and audio understanding relative to their backbone LLMs undiagnosed. Method: We introduce URO-Bench, the first comprehensive S2S spoken dialogue benchmark, organized into two difficulty tracks (basic and pro) totaling 36 datasets and covering three core dimensions: Understanding, Reasoning, and Oral conversation (URO), including multilingual, multi-turn, and paralinguistic evaluation. The evaluation pipeline integrates ASR/TTS, multimodal audio understanding, and LLM-based assessment, augmented by human annotation and automated metrics (WER, SER, DST, paralinguistic F1). Contribution/Results: Experiments expose severe instruction forgetting and weak paralinguistic modeling in current open-source SDMs, establishing a reproducible, fine-grained diagnostic toolkit for the community.
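The pipeline sentence above compresses several moving parts; below is a minimal sketch of one pass through such an S2S evaluation loop. The model interfaces (`tts`, `sdm`, `asr`, `llm_judge`) are hypothetical placeholders passed in as callables; this illustrates the ASR/TTS-plus-LLM-judge pattern being described, not URO-Bench's actual code.

```python
# Minimal sketch of one pass through an S2S evaluation loop like the one
# described above. The model interfaces (tts, sdm, asr, llm_judge) are
# hypothetical placeholders, not URO-Bench's actual API.
import jiwer  # third-party WER implementation (pip install jiwer)

def evaluate_sample(sample, tts, sdm, asr, llm_judge):
    """Score one benchmark item end to end: text -> speech -> SDM -> speech -> text."""
    # 1. Synthesize the text prompt into audio, unless audio is already provided.
    prompt_audio = sample.get("audio") or tts(sample["prompt"])

    # 2. The spoken dialogue model answers in speech; many SDMs also emit
    #    a parallel text channel alongside the audio.
    response_text, response_audio = sdm(prompt_audio)

    # 3. Transcribe the spoken answer so text-based metrics can be applied.
    transcript = asr(response_audio)

    # 4. An LLM judge scores content quality; WER between the model's text
    #    channel and the ASR transcript gives a rough speech-text
    #    consistency check.
    return {
        "judge_score": llm_judge(sample["prompt"], transcript),
        "consistency_wer": jiwer.wer(response_text, transcript),
    }
```

A multi-turn variant of the same loop would carry the accumulated dialogue history across calls to `sdm`, which is where the benchmark's multi-round datasets would exercise the model.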

📝 Abstract
In recent years, with advances in large language models (LLMs), end-to-end spoken dialogue models (SDMs) have made significant strides. Compared to text-based LLMs, the evaluation of SDMs needs to take speech-related aspects into account, such as paralinguistic information and speech quality. However, there is still a lack of comprehensive evaluations for SDMs in speech-to-speech (S2S) scenarios. To address this gap, we propose URO-Bench, an extensive benchmark for SDMs. Notably, URO-Bench is the first S2S benchmark that covers evaluations about multilingualism, multi-round dialogues, and paralinguistics. Our benchmark is divided into two difficulty levels: basic track and pro track, consisting of 16 and 20 datasets respectively, evaluating the model's abilities in Understanding, Reasoning, and Oral conversation. Evaluations on our proposed benchmark reveal that current open-source SDMs perform rather well in daily QA tasks, but lag behind their backbone LLMs in terms of instruction-following ability and also suffer from catastrophic forgetting. Their performance in advanced evaluations of paralinguistic information and audio understanding remains subpar, highlighting the need for further research in this direction. We hope that URO-Bench can effectively facilitate the development of spoken dialogue models by providing a multifaceted evaluation of existing models and helping to track progress in this area.
Problem

Research questions and friction points this paper is trying to address.

How to systematically evaluate end-to-end spoken dialogue models in S2S scenarios
How to assess multilingualism, multi-turn dialogue, and paralinguistics in speech-based interaction
Why current SDMs lag their backbone LLMs in instruction following and audio understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual and multi-round dialogue evaluation
Paralinguistics and speech quality focus
Two-track benchmark with varied difficulty (see the sketch below)
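To make the two-track, three-dimension organization concrete, here is one plausible way such a benchmark could be laid out in code. All dataset names are invented placeholders; only the track and dimension structure follows the paper (per the abstract, the real tracks contain 16 and 20 datasets).

```python
# Hypothetical layout for a two-track benchmark keyed by the three URO
# dimensions (Understanding, Reasoning, Oral conversation). Dataset names
# are illustrative placeholders, not the paper's actual datasets.
BENCHMARK = {
    "basic": {
        "Understanding": ["daily-qa-en", "daily-qa-zh"],
        "Reasoning": ["commonsense-choice"],
        "Oral": ["single-turn-chat"],
    },
    "pro": {
        "Understanding": ["emotion-cues", "speaker-style"],
        "Reasoning": ["multi-turn-math"],
        "Oral": ["multilingual-multi-round-chat"],
    },
}

def jobs(track):
    """Flatten one track into (dimension, dataset) evaluation jobs."""
    return [(dim, ds) for dim, sets in BENCHMARK[track].items() for ds in sets]
```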
👥 Authors
Ruiqi Yan (Shanghai Jiao Tong University) · Deep Learning, Audio, Speech
Xiquan Li (Shanghai Jiao Tong University) · Audio Understanding, Audio Generation, Large Language Models
Wenxi Chen (MoE Key Lab of Artificial Intelligence, X-LANCE Lab, Shanghai Jiao Tong University)
Zhikang Niu (Shanghai Jiao Tong University) · Speech Synthesis
Chen Yang (MoE Key Lab of Artificial Intelligence, X-LANCE Lab, Shanghai Jiao Tong University)
Ziyang Ma (MoE Key Lab of Artificial Intelligence, X-LANCE Lab, Shanghai Jiao Tong University)
Kai Yu (MoE Key Lab of Artificial Intelligence, X-LANCE Lab, Shanghai Jiao Tong University)
Xie Chen (MoE Key Lab of Artificial Intelligence, X-LANCE Lab, Shanghai Jiao Tong University)