Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis

📅 2025-07-08
🤖 AI Summary
This paper addresses two key limitations of automatic speech quality assessment (ASQA): weak utterance-level prediction performance and insufficient generalization across multiple granularities. To this end, we propose a multi-granularity evaluation framework that integrates self-supervised speech representations (wav2vec 2.0) with a Mixture-of-Experts (MoE) classification head. Methodologically, we design a task-aware MoE classifier and augment training with a large-scale synthetic speech dataset generated by diverse commercial text-to-speech models to enhance fine-grained modeling. Our contributions are threefold: (1) we systematically identify and characterize the fundamental bottlenecks of existing ASQA methods in utterance-level assessment; (2) we introduce a scalable MoE architecture that significantly improves system-level performance while enabling interpretable failure analysis and targeted improvement pathways for utterance-level evaluation; (3) we publicly release both a high-quality synthetic dataset and the trained models to advance research in multi-granularity speech quality assessment.

📝 Abstract
Automatic speech quality assessment plays a crucial role in the development of speech synthesis systems, but existing models show large performance gaps across prediction granularities. This paper proposes an enhanced MOS prediction system based on self-supervised speech models, incorporating a Mixture-of-Experts (MoE) classification head and using synthetic data from multiple commercial generation models for augmentation. Our method builds on self-supervised models such as wav2vec2, with a specialized MoE architecture designed to handle different types of speech quality assessment tasks. We also collected a large-scale synthetic speech dataset covering the latest text-to-speech, voice conversion, and speech enhancement systems. However, despite the MoE architecture and the expanded dataset, the model's gains on utterance-level prediction remain limited. Our work reveals the limitations of current methods in utterance-level quality assessment, provides new technical pathways for automatic speech quality assessment, and examines the fundamental causes of performance differences across assessment granularities.
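The abstract describes the head as a softmax-gated mixture of experts on top of pooled self-supervised features. A minimal sketch of that idea is below; it is not the authors' released code, and every name, dimension, and the sigmoid squashing into the 1–5 MOS range are illustrative assumptions.

```python
# Hedged sketch of an MoE MOS-prediction head: a softmax gate mixes
# per-expert scores computed from a pooled wav2vec2-style embedding.
# Parameters are randomly initialised; training is out of scope here.
import math
import random

random.seed(0)

EMB_DIM = 8      # stand-in for the real wav2vec2 feature size (e.g. 768)
NUM_EXPERTS = 4  # assumed number of experts in the mixture


def linear(weights, bias, x):
    """One linear layer: weights is a list of rows, x a vector."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
            for row, b in zip(weights, bias)]


def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]


gate_w = [[random.gauss(0, 0.1) for _ in range(EMB_DIM)]
          for _ in range(NUM_EXPERTS)]
gate_b = [0.0] * NUM_EXPERTS
expert_w = [[[random.gauss(0, 0.1) for _ in range(EMB_DIM)]]
            for _ in range(NUM_EXPERTS)]
expert_b = [[0.0] for _ in range(NUM_EXPERTS)]


def moe_mos(embedding):
    """Gate-weighted mixture of per-expert MOS regressors."""
    gates = softmax(linear(gate_w, gate_b, embedding))
    expert_scores = [linear(expert_w[k], expert_b[k], embedding)[0]
                     for k in range(NUM_EXPERTS)]
    raw = sum(g * s for g, s in zip(gates, expert_scores))
    # Squash the mixed score into the 1..5 MOS range with a sigmoid.
    return 1.0 + 4.0 / (1.0 + math.exp(-raw))


emb = [random.gauss(0, 1) for _ in range(EMB_DIM)]
score = moe_mos(emb)
print(round(score, 3))
```

Because the gate weights sum to one, each expert can specialize (e.g. per task type, as the "task-aware" classifier suggests) while the head still emits a single calibrated score.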
Problem

Research questions and friction points this paper is trying to address.

Enhance speech quality assessment across granularity levels
Address limitations in sentence-level prediction performance
Analyze causes of performance differences in assessment tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Experts enhances MOS prediction
Self-supervised learning with wav2vec2 foundation
Large synthetic dataset from TTS systems
Xintong Hu
College of CS, Zhejiang University, Hangzhou, CHINA
Yixuan Chen
Oxford Suzhou Center for Advanced Research
Disentanglement · Vision-Language Model · AI for Medical
Rui Yang
College of CS, Zhejiang University, Hangzhou, CHINA
Wenxiang Guo
College of CS, Zhejiang University, Hangzhou, CHINA
Changhao Pan
Zhejiang University
Multi-Modal Generative AI · Singing Voice Synthesis