UrgentMOS: Unified Multi-Metric and Preference Learning for Robust Speech Quality Assessment

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of existing speech quality assessment models, which rely heavily on scarce human-annotated Mean Opinion Score (MOS) data and struggle in cross-dataset scenarios. To overcome this, we propose UrgentMOS, a framework that is the first to unify heterogeneous supervision signals, including absolute MOS scores and pairwise comparative MOS (CMOS) preferences, enabling training even when only arbitrary subsets of these metrics are available. By integrating multi-task learning, heterogeneous supervision fusion, and explicit preference modeling, UrgentMOS effectively leverages partially labeled data to significantly enhance cross-dataset robustness. Extensive evaluations across multiple speech quality benchmarks demonstrate that UrgentMOS achieves state-of-the-art performance in both absolute and relative scoring tasks, consistently outperforming current methods.

📝 Abstract
Automatic speech quality assessment has become increasingly important as modern speech generation systems continue to advance, while human listening tests remain costly, time-consuming, and difficult to scale. Most existing learning-based assessment models rely primarily on scarce human-annotated mean opinion score (MOS) data, which limits robustness and generalization, especially when training across heterogeneous datasets. In this work, we propose UrgentMOS, a unified speech quality assessment framework that jointly learns from diverse objective and perceptual quality metrics, while explicitly tolerating the absence of arbitrary subsets of metrics during training. By leveraging complementary quality facets under heterogeneous supervision, UrgentMOS enables effective utilization of partially annotated data and improves robustness when trained on large-scale, multi-source datasets. Beyond absolute score prediction, UrgentMOS explicitly models pairwise quality preferences by directly predicting comparative MOS (CMOS), making it well suited for preference-based evaluation scenarios commonly adopted in system benchmarking. Extensive experiments across a wide range of speech quality datasets, including simulated distortions, speech enhancement, and speech synthesis, demonstrate that UrgentMOS consistently achieves state-of-the-art performance in both absolute and comparative evaluation settings.
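The abstract's claim of "explicitly tolerating the absence of arbitrary subsets of metrics during training" can be illustrated with a masked multi-task loss, where each sample contributes only the metric heads it actually has labels for. This is a minimal sketch under assumed conventions, not the paper's implementation; the function name and the metric layout are invented for illustration.

```python
def masked_multimetric_loss(preds, targets, mask):
    """Mean absolute error over metric heads, skipping missing labels.

    preds, targets: per-sample lists with one entry per quality metric
    (e.g. MOS, PESQ, STOI). mask[i][j] is 1 when metric j is annotated
    for sample i, 0 when that label is absent (partially annotated data).
    """
    total, count = 0.0, 0
    for p_row, t_row, m_row in zip(preds, targets, mask):
        for p, t, m in zip(p_row, t_row, m_row):
            if m:  # only observed labels contribute to the loss
                total += abs(p - t)
                count += 1
    # Normalize by the number of observed labels so the loss scale
    # stays comparable across samples with different annotation coverage.
    return total / max(count, 1)
```

Because unlabeled metric heads are simply masked out, a batch can freely mix samples from datasets that annotate different metric subsets.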
Problem

Research questions and friction points this paper is trying to address.

speech quality assessment
mean opinion score
heterogeneous datasets
robustness
preference learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

unified multi-metric learning
preference learning
robust speech quality assessment
partially annotated data
comparative MOS prediction
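The comparative MOS prediction listed above can be sketched with a common preference-learning formulation: score each utterance with the absolute head, then apply a Bradley-Terry-style logistic loss to the score difference. The paper's actual CMOS head may differ; this is an assumed, illustrative formulation with invented names.

```python
import math

def cmos_preference_loss(score_a, score_b, preference):
    """Logistic loss on one pairwise comparison (Bradley-Terry style).

    score_a, score_b: predicted absolute quality scores for two systems
    on the same content; preference: +1 if system A was preferred in the
    listening test, -1 if system B was.
    """
    # Margin is positive when the model's score ordering matches the
    # human preference; the logistic loss then decays toward zero.
    margin = preference * (score_a - score_b)
    return math.log(1.0 + math.exp(-margin))
```

A larger score gap in the preferred direction yields a smaller loss, so training on such pairs pushes the scorer to rank systems consistently with human preferences even when no absolute MOS label exists for either utterance.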
👥 Authors
Wei Wang
Shanghai Jiao Tong University
speech recognition, speech enhancement, text-to-speech

Wangyou Zhang
Assistant Professor, School of Artificial Intelligence, Shanghai Jiao Tong University
Speech Separation and Enhancement, Robust Speech Recognition, Speech Representation Learning

Chenda Li
Shanghai Jiao Tong University
Speech Separation

Jiahe Wang
Shanghai Jiao Tong University, China

Samuele Cornell
Carnegie Mellon University, Language Technologies Institute
Speech Processing, Machine Learning

Marvin Sach
Technische Universität Braunschweig
Machine Learning, Speech Enhancement, Noise Suppression

Kohei Saijo
Waseda University
Audio Source Separation

Yihui Fu
Technische Universität Braunschweig
Speech Processing

Zhaoheng Ni
Meta Reality Labs
Speech Enhancement, Generative Modeling, Natural Language Processing

Bing Han
Shanghai Jiao Tong University
Speaker Verification, Sound Analysis, Speech Synthesis, Anomalous Sound Detection

Xun Gong
Shanghai Jiao Tong University
speech recognition, speech llm, speech understanding

Mengxiao Bi
Fuxi AI Lab, NetEase Inc.
Deep Learning

Tim Fingscheidt
Professor, IEEE Fellow, ITG Fellow, Technische Universität Braunschweig, Germany
Speech Enhancement, Acoustic Signal Processing, Speech Processing, Environment Perception, NLP

Shinji Watanabe
Carnegie Mellon University
Speech recognition, Speech processing, Speech enhancement, Speech translation

Yanmin Qian
Professor, Shanghai Jiao Tong University
Speech and Language Processing, Signal Processing, Machine Learning