APG-MOS: Auditory Perception Guided-MOS Predictor for Synthetic Speech

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing automatic speech quality assessment models neglect human auditory perception mechanisms, resulting in suboptimal correlation with subjective Mean Opinion Score (MOS) ratings. To address this, we propose the Auditory Perception Guided MOS predictor (APG-MOS), the first model to integrate biologically inspired cochlear encoding with residual vector quantization (RVQ)-based semantic distortion modeling. APG-MOS introduces a residual cross-modal attention fusion architecture and a multi-stage progressive learning strategy to jointly model auditory perception and semantic distortion. Evaluated on two major benchmarks, VCC2018 and DNS-Challenge, APG-MOS significantly outperforms state-of-the-art methods, markedly improving correlation with human MOS ratings. The model architecture, training code, and pre-trained weights will be publicly released.

📝 Abstract
Automatic speech quality assessment aims to quantify subjective human perception of speech through computational models, reducing the need for labor-intensive manual evaluation. While deep learning models have made progress in predicting mean opinion scores (MOS) for synthetic speech, their neglect of fundamental auditory perception mechanisms limits consistency with human judgments. To address this, we propose an auditory perception guided MOS prediction model (APG-MOS) that synergistically integrates auditory modeling with semantic analysis to enhance consistency with human judgments. Specifically, we first design a perceptual module, grounded in biological auditory mechanisms, that simulates cochlear functions and encodes acoustic signals into biologically aligned electrochemical representations. Second, we propose a residual vector quantization (RVQ)-based semantic distortion modeling method to quantify speech quality degradation at the semantic level. Finally, we design a residual cross-attention architecture, coupled with a progressive learning strategy, to enable multimodal fusion of the encoded electrochemical signals and semantic representations. Experiments demonstrate that APG-MOS achieves superior performance on two primary benchmarks. Our code and checkpoints will be made available in a public repository upon publication.
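The RVQ-based semantic distortion idea from the abstract can be sketched as follows. This is a minimal illustrative proxy, not the paper's implementation: the function names, random codebooks, feature dimensions, and the code-mismatch distortion score are all assumptions. RVQ quantizes a feature through a cascade of codebooks, each stage encoding the residual left by the previous one; comparing the code paths of clean and degraded semantic features gives a crude measure of semantic-level degradation.

```python
import numpy as np

def rvq_quantize(x, codebooks):
    """Residual vector quantization: each stage picks the nearest code
    word for the residual left by the previous stage."""
    residual = x.copy()
    codes = []
    for cb in codebooks:                        # cb: (num_codes, dim)
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))             # nearest code word
        codes.append(idx)
        residual = residual - cb[idx]           # pass residual to next stage
    return codes, residual

def semantic_distortion(clean_feat, degraded_feat, codebooks):
    """Illustrative distortion proxy (assumption, not the paper's metric):
    the fraction of RVQ stages whose selected codes differ between the
    clean and degraded semantic features."""
    codes_c, _ = rvq_quantize(clean_feat, codebooks)
    codes_d, _ = rvq_quantize(degraded_feat, codebooks)
    return sum(c != d for c, d in zip(codes_c, codes_d)) / len(codebooks)

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(16, 8)) for _ in range(4)]  # 4 stages, 16 codes each
clean = rng.normal(size=8)
degraded = clean + 0.5 * rng.normal(size=8)               # simulated degradation
score = semantic_distortion(clean, degraded, codebooks)   # in [0, 1]
```

In practice the codebooks would come from a trained neural codec and the features from a semantic encoder; the sketch only shows the quantize-then-compare structure.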
Problem

Research questions and friction points this paper is trying to address.

Improving synthetic speech quality assessment using auditory perception
Enhancing MOS prediction consistency with human judgments
Integrating biological auditory modeling with semantic analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulates cochlear functions using biological auditory mechanisms
Quantifies speech quality degradation via RVQ semantic distortion modeling
Fuses multimodal signals with residual cross-attention architecture
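The residual cross-attention fusion named above can be sketched in a few lines. This is a simplified single-head version under assumed shapes, not the paper's architecture: queries come from one modality (e.g. cochlear-encoded frames), keys and values from the other (e.g. RVQ semantic tokens), and the attended context is added back to the query stream through a residual connection.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_cross_attention(query_feats, kv_feats):
    """One residual cross-attention step: scaled dot-product attention
    of queries over the other modality's keys/values, with the result
    added back to the queries (residual connection)."""
    d_k = query_feats.shape[-1]
    scores = query_feats @ kv_feats.T / np.sqrt(d_k)   # (Tq, Tk)
    attn = softmax(scores, axis=-1)                    # attention weights
    context = attn @ kv_feats                          # (Tq, d) fused context
    return query_feats + context                       # residual add

rng = np.random.default_rng(1)
cochlear = rng.normal(size=(10, 8))   # assumed: 10 cochlear-encoded frames
semantic = rng.normal(size=(6, 8))    # assumed: 6 semantic token embeddings
fused = residual_cross_attention(cochlear, semantic)  # shape (10, 8)
```

A real implementation would add learned projections, multiple heads, and layer normalization; the residual add is what lets the fused stream preserve the original perceptual encoding while mixing in semantic evidence.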
Zhicheng Lian
Beijing Normal University
Speech Assessment · Music Information Retrieval · Audio Processing
Lizhi Wang
Beijing Normal University, Beijing, China
Hua Huang
Beijing Normal University, Beijing, China