🤖 AI Summary
This paper addresses two key limitations of automatic speech quality assessment (ASQA): weak sentence-level prediction performance and insufficient generalization across multiple granularities. To this end, the authors propose a multi-granularity evaluation framework that integrates self-supervised speech representations (wav2vec 2.0) with a Mixture-of-Experts (MoE) classification head. Methodologically, they design a task-aware MoE classifier and augment training with a large-scale synthetic speech dataset generated by diverse commercial text-to-speech, voice conversion, and speech enhancement systems to enhance fine-grained modeling. The contributions are threefold: (1) they systematically identify and characterize the fundamental bottlenecks of existing ASQA methods in sentence-level assessment; (2) they introduce a scalable MoE architecture that improves system-level performance while enabling interpretable failure analysis and targeted improvement pathways for utterance-level evaluation; (3) they publicly release both the synthetic dataset and the trained models to advance research in multi-granularity speech quality assessment.
📝 Abstract
Automatic speech quality assessment plays a crucial role in the development of speech synthesis systems, but existing models exhibit significant performance variations across prediction tasks at different granularities. This paper proposes an enhanced MOS (mean opinion score) prediction system based on self-supervised speech models, incorporating a Mixture-of-Experts (MoE) classification head and augmenting training with synthetic data from multiple commercial generation models. Our method builds on existing self-supervised models such as wav2vec 2.0, adding a specialized MoE architecture to handle different types of speech quality assessment tasks. We also collected a large-scale synthetic speech dataset covering recent text-to-speech, voice conversion, and speech enhancement systems. However, despite the MoE architecture and the expanded dataset, the model's improvements on sentence-level prediction tasks remain limited. Our work reveals the limitations of current methods in sentence-level quality assessment, provides new technical pathways for automatic speech quality assessment, and examines the fundamental causes of performance differences across assessment granularities.
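The paper does not publish implementation details here, but the core idea, a gated mixture of expert heads on top of pooled self-supervised features, can be sketched roughly as below. This is a minimal illustrative PyTorch sketch, not the authors' code: the dimensions, the number of experts, mean pooling, and dense (soft) gating are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class MoEHead(nn.Module):
    """Hypothetical Mixture-of-Experts head for MOS prediction.

    A softmax gate, computed from pooled SSL features (e.g. wav2vec 2.0
    hidden states), weights several small MLP experts; the weighted sum
    of expert outputs gives a scalar MOS estimate per utterance.
    """

    def __init__(self, feat_dim: int = 768, num_experts: int = 4, hidden: int = 256):
        super().__init__()
        self.gate = nn.Linear(feat_dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_experts)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) frame-level SSL representations
        pooled = feats.mean(dim=1)                          # (batch, feat_dim)
        weights = torch.softmax(self.gate(pooled), dim=-1)  # (batch, num_experts)
        # Each expert maps pooled features to a scalar score
        outs = torch.stack(
            [expert(pooled).squeeze(-1) for expert in self.experts], dim=-1
        )                                                   # (batch, num_experts)
        return (weights * outs).sum(dim=-1)                 # (batch,) predicted MOS

# Toy usage: a batch of 2 utterances, 50 frames of 768-dim features each
head = MoEHead()
mos = head(torch.randn(2, 50, 768))
print(mos.shape)  # torch.Size([2])
```

The gate's per-utterance expert weights are also what would make the interpretable failure analysis mentioned in the summary possible: inspecting which expert dominates for which inputs indicates how the model partitions the task space.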