Fine-grained Preference Optimization Improves Zero-shot Text-to-Speech

📅 2025-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In zero-shot text-to-speech (TTS), local audio artifacts—such as distortion and repetition—are difficult to correct with conventional utterance-level preference optimization, which scores and optimizes whole sentences even when only a short segment is flawed. To address this, the authors propose Fine-grained Preference Optimization (FPO), a preference-based method that targets problematic audio segments rather than entire utterances. FPO analyzes the types of issues in generated samples, categorizes them into two groups, and introduces a selective training loss that applies gradient updates based on fine-grained, issue-type-specific labels. Integrated into a zero-shot TTS fine-tuning framework, FPO improves the model's ability to localize and correct fine-grained artifacts: the bad-case ratio drops significantly while intelligibility improves. Moreover, FPO matches or exceeds baseline performance with fewer training samples, demonstrating strong data efficiency.

📝 Abstract
Integrating human feedback to align text-to-speech (TTS) system outputs with human preferences has proven to be an effective approach for enhancing the robustness of language model-based TTS systems. Current approaches primarily focus on using preference data annotated at the utterance level. However, frequent issues that affect the listening experience often only arise in specific segments of audio samples, while other segments are well-generated. In this study, we propose a fine-grained preference optimization approach (FPO) to enhance the robustness of TTS systems. FPO focuses on addressing localized issues in generated samples rather than uniformly optimizing the entire utterance. Specifically, we first analyze the types of issues in generated samples, categorize them into two groups, and propose a selective training loss strategy to optimize preferences based on fine-grained labels for each issue type. Experimental results show that FPO enhances the robustness of zero-shot TTS systems by effectively addressing local issues, significantly reducing the bad case ratio, and improving intelligibility. Furthermore, FPO exhibits superior data efficiency compared with baseline systems, achieving similar performance with fewer training samples.
Problem

Research questions and friction points this paper is trying to address.

Optimizes text-to-speech system robustness
Addresses localized audio segment issues
Enhances zero-shot TTS system performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained preference optimization
Selective training loss strategy
Localized issue addressing
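The selective training loss is only described at a high level here. As a rough illustration, a segment-level preference loss could weight per-token preference gradients by a defect mask while anchoring clean segments to a reference model. The function name, the weighting scheme, and the clean-segment anchor term below are all assumptions for illustration, not the paper's exact formulation:

```python
import math

def logsigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def fpo_style_loss(policy_logps, ref_logps, defect_mask, weights, beta=0.1):
    """Hypothetical segment-level preference loss (sketch, not the paper's exact loss).

    policy_logps, ref_logps: per-token log-probs of a generated sample under the
    policy and a frozen reference model.
    defect_mask: 1 where a local artifact was labeled, 0 for clean tokens.
    weights: issue-type-dependent weights (e.g. larger for repetition).
    Only labeled tokens receive a preference gradient; clean tokens are lightly
    anchored to the reference so well-generated segments are left alone.
    """
    defect_loss, clean_loss, n_defect = 0.0, 0.0, 0
    for p, r, m, w in zip(policy_logps, ref_logps, defect_mask, weights):
        diff = p - r
        if m:  # labeled artifact: push its probability down (DPO-style "rejected")
            defect_loss += -logsigmoid(-beta * diff) * w
            n_defect += 1
        else:  # clean token: small quadratic anchor to the reference
            clean_loss += 0.01 * diff ** 2
    return defect_loss / max(n_defect, 1) + clean_loss / len(policy_logps)
```

The key property this sketch captures is selectivity: lowering the policy's log-probability on a labeled defective token reduces the loss, while clean segments contribute no preference gradient at all.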
Jixun Yao
Northwestern Polytechnical University
Yuguang Yang
Microsoft, Amazon Alexa AI, Tsinghua University, Johns Hopkins University
Yu Pan
Everest Team, Ximalaya
Yuan Feng
Everest Team, Ximalaya
Ziqian Ning
Northwestern Polytechnical University
Jianhao Ye
Everest Team, Ximalaya
Hongbin Zhou
Shanghai AI Laboratory
Lei Xie
Northwestern Polytechnical University