Vox-Evaluator: Enhancing Stability and Fidelity for Zero-shot TTS with A Multi-Level Evaluator

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Zero-shot text-to-speech (TTS) systems suffer from instability and fidelity degradation, including pronunciation errors, noise artifacts, and reduced speech quality. To address these issues, we propose Vox-Evaluator, a multi-level evaluation-and-repair framework. Its core contributions are threefold: (1) a fine-grained speech error annotation dataset enabling precise error localization and end-to-end quality assessment; (2) a unified architecture integrating large language models, diffusion models, and mask generation for automatic error detection, localized speech masking, and conditional regeneration; and (3) a preference-alignment optimization mechanism to enhance phonemic accuracy and prosodic naturalness. Experiments demonstrate that Vox-Evaluator significantly reduces mispronunciation rates and audio distortion, achieving a mean opinion score (MOS) improvement of over 0.8. It consistently enhances both stability and fidelity across multiple zero-shot TTS benchmarks.

📝 Abstract
Recent advances in zero-shot text-to-speech (TTS), driven by language models, diffusion models, and masked generation, have achieved impressive naturalness in speech synthesis. Nevertheless, stability and fidelity remain key challenges, manifesting as mispronunciations, audible noise, and quality degradation. To address these issues, we introduce Vox-Evaluator, a multi-level evaluator designed to guide the correction of erroneous speech segments and preference alignment for TTS systems. It is capable of identifying the temporal boundaries of erroneous segments and providing a holistic quality assessment of the generated speech. Specifically, to refine erroneous segments and enhance the robustness of the zero-shot TTS model, we propose to automatically identify acoustic errors with the evaluator, mask the erroneous segments, and finally regenerate speech conditioned on the correct portions. In addition, the fine-grained information obtained from Vox-Evaluator can guide preference alignment for the TTS model, thereby reducing bad cases in speech synthesis. Owing to the lack of suitable training datasets for Vox-Evaluator, we also constructed a synthesized text-speech dataset annotated with fine-grained pronunciation errors and audio quality issues. The experimental results demonstrate the effectiveness of the proposed Vox-Evaluator in enhancing the stability and fidelity of TTS systems through the speech correction mechanism and preference optimization. Demos are provided.
Problem

Research questions and friction points this paper is trying to address.

Improving zero-shot TTS stability by correcting erroneous speech segments
Enhancing speech fidelity through multi-level quality assessment
Reducing mispronunciations and audio degradation in synthesized speech
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-level evaluator identifies the temporal boundaries of erroneous speech segments
Mask and regenerate erroneous segments using evaluator guidance
Preference alignment optimization reduces bad cases in synthesis
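The evaluate-mask-regenerate loop outlined in these bullets can be sketched as follows. This is a minimal illustration only: the names (`EvalReport`, `build_mask`, `repair`), the frame-level representation, and the placeholder regenerator are assumptions, not the paper's actual models or API; in the paper's setting the regeneration step would be a masked-generation/diffusion model conditioned on the unmasked frames.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class EvalReport:
    spans: List[Tuple[int, int]]  # half-open frame ranges flagged as erroneous
    mos: float                    # holistic quality score for the utterance

def build_mask(num_frames: int, spans: List[Tuple[int, int]]) -> List[bool]:
    # True marks frames to regenerate; False frames serve as conditioning context.
    mask = [False] * num_frames
    for start, end in spans:
        for i in range(start, min(end, num_frames)):
            mask[i] = True
    return mask

def repair(frames: List[float], report: EvalReport,
           regenerate: Callable[[int, List[float]], float]) -> List[float]:
    # Keep correct frames verbatim; replace masked frames with regenerated ones.
    mask = build_mask(len(frames), report.spans)
    return [regenerate(i, frames) if masked else frame
            for i, (frame, masked) in enumerate(zip(frames, mask))]

# Toy usage: a placeholder "regenerator" that zeroes the masked frames.
frames = [0.1, 0.2, 9.9, 9.9, 0.5, 0.6]   # frames 2-3 are corrupted
report = EvalReport(spans=[(2, 4)], mos=3.1)
fixed = repair(frames, report, lambda i, ctx: 0.0)
# fixed == [0.1, 0.2, 0.0, 0.0, 0.5, 0.6]
```

The key design point the sketch preserves is that only the flagged spans are rewritten, while the surrounding correct frames are left untouched and act as conditioning context for the regeneration.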
Hualei Wang
Tencent AI Lab, Shenzhen
Na Li
Tencent AI Lab, Shenzhen
Chuke Wang
Tencent AI Lab, Shenzhen
Shu Wu
Tencent AI Lab, Shenzhen
Zhifeng Li
Tencent
computer vision, pattern recognition, with a recent focus on AIGC
Dong Yu
Tencent AI Lab, Seattle