🤖 AI Summary
Evaluating speech quality requires simultaneous consideration of multiple metrics—including Mean Opinion Score (MOS), speaker similarity (SIM), A/B preference judgments, and natural-language quality descriptions—posing challenges for conventional single-task small models. This paper proposes an end-to-end multi-metric evaluation framework based on auditory large language models (auditory LLMs), enabling joint prediction across metrics together with generative quality explanation. The authors finetune open-source auditory LLMs (SALMONN, Qwen-Audio, and Qwen2-Audio) with task-specific prompts, and additionally evaluate the commercial Gemini 1.5 Pro on the generative description task. On four benchmarks—NISQA, BVCC, SOMOS, and VoxSim—the method achieves performance competitive with state-of-the-art task-specific small models for MOS and SIM prediction, delivers promising results in A/B preference classification, and generates fluent, semantically consistent natural-language descriptions, enhancing the interpretability of speech quality assessment.
📝 Abstract
Speech quality assessment typically requires evaluating audio from multiple aspects, such as mean opinion score (MOS) and speaker similarity (SIM), which can be challenging to cover using one small model designed for a single task. In this paper, we propose leveraging recently introduced auditory large language models (LLMs) for automatic speech quality assessment. By employing task-specific prompts, auditory LLMs are finetuned to predict MOS, SIM, and A/B testing results, which are commonly used for evaluating text-to-speech systems. Additionally, the finetuned auditory LLM is able to generate natural language descriptions assessing aspects like noisiness, distortion, discontinuity, and overall quality, providing more interpretable outputs. Extensive experiments have been performed on the NISQA, BVCC, SOMOS, and VoxSim speech quality datasets, using open-source auditory LLMs such as SALMONN, Qwen-Audio, and Qwen2-Audio. For the natural language description task, the commercial model Google Gemini 1.5 Pro is also evaluated. The results demonstrate that auditory LLMs achieve competitive performance compared to state-of-the-art task-specific small models in predicting MOS and SIM, while also delivering promising results in A/B testing and natural language descriptions. Our data processing scripts and finetuned model checkpoints can be found at https://github.com/bytedance/SALMONN.
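To illustrate the task-specific prompting idea, here is a minimal sketch in Python of how per-metric instruction prompts could be organized for finetuning. The prompt wording, the `<Speech>` placeholder token, and the `build_prompt` helper are all hypothetical illustrations, not the paper's actual templates (those are in the released scripts at the repository linked above).

```python
# Hypothetical task-specific prompt templates for finetuning an auditory LLM
# on speech quality assessment. Wording and the audio placeholder token are
# illustrative assumptions, not the paper's exact prompts.

TASK_PROMPTS = {
    "mos": (
        "Listen to the audio and rate its overall quality as a "
        "mean opinion score from 1 to 5."
    ),
    "sim": (
        "Listen to the two utterances and rate how similar the "
        "speakers sound."
    ),
    "ab_test": (
        "Listen to sample A and sample B, then answer which one "
        "sounds more natural."
    ),
    "description": (
        "Describe the audio in terms of noisiness, distortion, "
        "discontinuity, and overall quality."
    ),
}


def build_prompt(task: str, audio_placeholder: str = "<Speech>") -> str:
    """Prepend the model's audio placeholder to the task instruction."""
    return f"{audio_placeholder} {TASK_PROMPTS[task]}"
```

During finetuning, each training example would pair one such prompt (plus the audio) with the target label rendered as text, so a single model covers all four tasks.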