🤖 AI Summary
This work addresses the limited generalization of existing speech quality assessment models, which rely heavily on scarce human-annotated mean opinion score (MOS) data and degrade in cross-dataset scenarios. To overcome this, we propose UrgentMOS, a framework that, for the first time, unifies heterogeneous supervision signals, including absolute MOS scores and pairwise comparative MOS (CMOS) preferences, and remains trainable even when arbitrary subsets of these metrics are missing. By combining multi-task learning, heterogeneous supervision fusion, and explicit preference modeling, UrgentMOS exploits partially labeled data to improve cross-dataset robustness. Evaluations across multiple speech quality benchmarks show that UrgentMOS achieves state-of-the-art performance in both absolute and relative scoring tasks, consistently outperforming existing methods.
📝 Abstract
Automatic speech quality assessment has become increasingly important as modern speech generation systems continue to advance, while human listening tests remain costly, time-consuming, and difficult to scale. Most existing learning-based assessment models rely primarily on scarce human-annotated mean opinion score (MOS) data, which limits robustness and generalization, especially when training across heterogeneous datasets. In this work, we propose UrgentMOS, a unified speech quality assessment framework that jointly learns from diverse objective and perceptual quality metrics, while explicitly tolerating the absence of arbitrary subsets of metrics during training. By leveraging complementary quality facets under heterogeneous supervision, UrgentMOS enables effective utilization of partially annotated data and improves robustness when trained on large-scale, multi-source datasets. Beyond absolute score prediction, UrgentMOS explicitly models pairwise quality preferences by directly predicting comparative MOS (CMOS), making it well suited for preference-based evaluation scenarios commonly adopted in system benchmarking. Extensive experiments across a wide range of speech quality datasets, including simulated distortions, speech enhancement, and speech synthesis, demonstrate that UrgentMOS consistently achieves state-of-the-art performance in both absolute and comparative evaluation settings.
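The two training signals described above, metric regression that tolerates missing labels and pairwise CMOS prediction, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names are hypothetical, metric availability is assumed to be encoded as a binary mask, and the CMOS term is assumed to be a regression on the difference of the two systems' predicted scores.

```python
import numpy as np

def masked_metric_loss(pred, target, mask):
    """Mean squared error over only the metrics that are annotated.

    pred, target, mask: arrays of shape (batch, num_metrics);
    mask[i, j] = 1 if metric j is available for sample i, else 0.
    Missing entries contribute nothing to the loss.
    """
    se = (pred - target) ** 2 * mask
    return float(se.sum() / np.maximum(mask.sum(), 1.0))

def cmos_loss(score_a, score_b, cmos_target):
    """Pairwise preference term: the predicted CMOS for a pair of
    systems is taken as the difference of their absolute scores."""
    return float(np.mean((score_a - score_b - cmos_target) ** 2))

# Toy batch: sample 0 has both metrics labeled, sample 1 only the second.
pred   = np.array([[3.0, 0.9], [4.0, 0.5]])
target = np.array([[3.5, 1.0], [0.0, 0.6]])  # missing label is ignored
mask   = np.array([[1.0, 1.0], [0.0, 1.0]])
reg = masked_metric_loss(pred, target, mask)   # averages over 3 labels

# Toy pair: system A scored 4.0, system B scored 3.0, annotators
# preferred A by +1.0, so the pairwise term is zero.
pair = cmos_loss(np.array([4.0]), np.array([3.0]), np.array([1.0]))
```

In a full training setup the two terms would typically be summed with a weighting hyperparameter, so batches with no CMOS annotations simply fall back to the masked regression term.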