🤖 AI Summary
In zero-shot text-to-speech (TTS), local audio artifacts—such as distortion and repetition—are difficult to correct effectively using conventional sentence-level preference optimization. To address this, we propose Fine-grained Preference Optimization (FPO), the first preference-based method operating at the audio segment level rather than the sentence level. FPO incorporates human-annotated local defect categories and introduces a defect-type-adaptive weighted loss function to enable selective gradient updates. Integrated into a zero-shot TTS fine-tuning framework, FPO significantly enhances the model’s ability to localize and rectify fine-grained artifacts: the proportion of degraded samples drops substantially, while speech intelligibility and naturalness improve markedly. Moreover, FPO achieves comparable or superior performance to baseline methods using fewer training samples, demonstrating high data efficiency and strong robustness against diverse artifact types.
📝 Abstract
Integrating human feedback to align text-to-speech (TTS) system outputs with human preferences has proven to be an effective approach for enhancing the robustness of language model-based TTS systems. Current approaches primarily rely on preference data annotated at the utterance level. However, issues that degrade the listening experience frequently arise only in specific segments of an audio sample, while the remaining segments are well generated. In this study, we propose a fine-grained preference optimization approach (FPO) to enhance the robustness of TTS systems. FPO focuses on addressing localized issues in generated samples rather than uniformly optimizing the entire utterance. Specifically, we first analyze the types of issues in generated samples, categorize them into two groups, and propose a selective training loss strategy that optimizes preferences based on fine-grained labels for each issue type. Experimental results show that FPO enhances the robustness of zero-shot TTS systems by effectively addressing local issues, significantly reducing the bad-case ratio and improving intelligibility. Furthermore, FPO exhibits superior data efficiency compared with baseline systems, achieving similar performance with fewer training samples.
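To make the idea of a selective, defect-type-weighted preference loss concrete, here is a minimal toy sketch. All names (`fpo_loss`, `defect_mask`, `type_weight`) are hypothetical illustrations, not the paper's actual implementation: the assumption is that each token of the preferred/rejected pair carries a per-token log-probability, a binary mask marking annotated defect segments, and a weight chosen per defect category, combined in a DPO-style logistic loss so that gradients flow only through the defective segments.

```python
import math

def fpo_loss(logp_pref, logp_rej, defect_mask, type_weight, beta=0.1):
    """Toy segment-level preference loss (hypothetical sketch).

    logp_pref / logp_rej: per-token log-probs of the preferred and
        rejected samples under the model.
    defect_mask: 1 for tokens inside an annotated defect segment, else 0,
        so well-generated segments contribute no gradient.
    type_weight: per-token weight set by the defect category
        (e.g. distortion vs. repetition).
    """
    # Weighted log-prob margin, restricted to defective segments.
    margin = sum(m * w * (lp - lr)
                 for lp, lr, m, w
                 in zip(logp_pref, logp_rej, defect_mask, type_weight))
    # Standard Bradley-Terry / DPO-style logistic loss on the margin.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With an all-zero mask the margin vanishes and the loss sits at log 2, its neutral value; raising the preferred sample's log-probability inside a masked defect segment lowers the loss, which is the selective-update behavior the abstract describes.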