VoiceAssistant-Eval: Benchmarking AI Assistants across Listening, Speaking, and Viewing

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks inadequately assess the integrated auditory, vocal, and visual capabilities of speech-first AI assistants. To address this, we introduce VoiceAssistant-Eval, a comprehensive benchmark for systematically evaluating multimodal coordination in voice-centric interaction, comprising 10,497 samples across 13 real-world task categories. It incorporates multi-turn dialogue, speaker voice imitation, natural sound and music recognition, and complex image understanding, with fine-grained evaluation along three axes: response content quality, speech output fidelity, and cross-modal consistency. We evaluate 21 open-source models alongside GPT-4o-Audio. Results reveal that (1) proprietary models do not universally outperform open-source ones; (2) well-designed smaller models can rival or outperform much larger ones on specific tasks, with the 7B Step-Audio-2-mini more than doubling the listening accuracy of LLaMA-Omni2-32B-Bilingual; and (3) substantial limitations persist in audio understanding, joint audio-visual reasoning, and role-play voice imitation.
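To make the benchmark's three scoring axes concrete, here is a minimal sketch of how a sample and its per-axis scores could be represented in Python. The field names, the 0-to-1 score range, and the unweighted-mean aggregation are illustrative assumptions on my part, not the benchmark's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EvalSample:
    task_category: str                # one of the 13 task categories, e.g. "music"
    audio_path: Optional[str] = None  # listening input, if any
    image_path: Optional[str] = None  # viewing input, if any (some tasks pair audio with images)
    dialogue_history: List[str] = field(default_factory=list)  # multi-turn context
    reference: str = ""               # reference answer used when judging a response

@dataclass
class AxisScores:
    content: float      # response quality: is the answer correct and helpful?
    speech: float       # speech output fidelity: is the spoken reply clear and natural?
    consistency: float  # cross-modal consistency: do the text and the speech agree?

def overall(s: AxisScores) -> float:
    """Unweighted mean of the three axes (an assumed aggregation, in [0, 1])."""
    return (s.content + s.speech + s.consistency) / 3.0
```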

📝 Abstract
The growing capabilities of large language models and multimodal systems have spurred interest in voice-first AI assistants, yet existing benchmarks are inadequate for evaluating the full range of these systems' capabilities. We introduce VoiceAssistant-Eval, a comprehensive benchmark designed to assess AI assistants across listening, speaking, and viewing. VoiceAssistant-Eval comprises 10,497 curated examples spanning 13 task categories. These tasks include natural sounds, music, and spoken dialogue for listening; multi-turn dialogue, role-play imitation, and various scenarios for speaking; and highly heterogeneous images for viewing. To demonstrate its utility, we evaluate 21 open-source models and GPT-4o-Audio, measuring the quality of the response content and speech, as well as their consistency. The results reveal three key findings: (1) proprietary models do not universally outperform open-source models; (2) most models excel at speaking tasks but lag in audio understanding; and (3) well-designed smaller models can rival much larger ones. Notably, the mid-sized Step-Audio-2-mini (7B) achieves more than double the listening accuracy of LLaMA-Omni2-32B-Bilingual. However, challenges remain: multimodal (audio plus visual) input and role-play voice imitation tasks are difficult for current models, and significant gaps persist in robustness and safety alignment. VoiceAssistant-Eval identifies these gaps and establishes a rigorous framework for evaluating and guiding the development of next-generation AI assistants. Code and data will be released at https://mathllm.github.io/VoiceAssistantEval/.
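Building on the hypothetical EvalSample schema sketched after the AI summary, the loop below shows how per-category scores for one model might be aggregated, in the spirit of the paper's 21-model comparison. The model.respond method and the judge callable are stand-ins assumed for illustration; the released code at the project page defines the real interfaces.

```python
from collections import defaultdict
from typing import Callable, Dict, Iterable, List

def evaluate(model, samples: Iterable[EvalSample],
             judge: Callable[[object, str], float]) -> Dict[str, float]:
    """Average judge scores per task category; judge maps (response, reference) to [0, 1]."""
    per_category: Dict[str, List[float]] = defaultdict(list)
    for sample in samples:
        response = model.respond(sample)  # assumed interface: returns text plus speech audio
        per_category[sample.task_category].append(judge(response, sample.reference))
    # Average within each of the 13 categories so that large categories
    # do not dominate the headline number.
    return {cat: sum(scores) / len(scores) for cat, scores in per_category.items()}
```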
Problem

Research questions and friction points this paper is trying to address.

Evaluating voice-first AI assistants across listening, speaking, and viewing capabilities
Assessing multimodal AI systems using 10,497 examples across 13 task categories
Identifying performance gaps in audio understanding and multimodal input processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces VoiceAssistant-Eval benchmark for AI assistants
Evaluates models across listening, speaking, and viewing tasks
Identifies gaps in multimodal input and role-play capabilities
👥 Authors
Ke Wang
CUHK MMLab
Houxing Ren
Beihang University
Zimu Lu
Ph.D. student at the Chinese University of Hong Kong
AI Reasoning · Large Language Model
Mingjie Zhan
SenseTime Research
Hongsheng Li
CUHK MMLab, CPII under InnoHK