Differentiable Reward Optimization for LLM-Based TTS Systems

📅 2025-07-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address bottlenecks in neural codec-based TTS systems—particularly limited phoneme accuracy and insufficient controllability over emotion and voice quality—this paper proposes a differentiable reward optimization framework. Methodologically, it introduces (1) a differentiable multi-task reward model built directly on neural codec tokens, enabling zero-shot control of prosodic and timbral attributes (e.g., emotion, voice quality); and (2) end-to-end reinforcement learning training via Gumbel-Softmax reparameterization to circumvent the non-differentiability of discrete token sampling—eliminating the need for policy gradient estimation and substantially simplifying optimization. Evaluated on the SEED-TTS-Eval benchmark, the method achieves a state-of-the-art word error rate (WER), significantly improves phoneme accuracy, and synthesizes high-fidelity, zero-shot emotional speech with superior naturalness and expressiveness.
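The Gumbel-Softmax reparameterization described above can be sketched in a few lines. The following is a minimal NumPy illustration of the general technique—not the paper's implementation—with an illustrative vocabulary size and temperature:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a differentiable, approximately one-hot sample from a
    categorical distribution over codec tokens.

    As tau -> 0 the output approaches a hard one-hot vector; larger
    tau gives a smoother relaxation that gradients can flow through.
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise via the inverse-CDF trick
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    # Softmax over the perturbed, temperature-scaled logits
    z = (logits + g) / tau
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy "codec vocabulary" of 5 tokens (values are illustrative)
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.0])
soft = gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0))
```

Because `soft` is a smooth function of `logits`, a reward model applied to it can be backpropagated through directly—the property DiffRO exploits to avoid policy gradient estimation.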

📝 Abstract
This paper proposes a novel Differentiable Reward Optimization (DiffRO) method aimed at enhancing the performance of neural codec language model-based text-to-speech (TTS) systems. In contrast to conventional reinforcement learning from human feedback (RLHF) approaches applied to TTS, DiffRO computes rewards directly from neural codec tokens rather than from synthesized audio. Furthermore, we employ the Gumbel-Softmax technique to render the reward function differentiable, thereby streamlining the RLHF training process. Additionally, we introduce a multi-task reward (MTR) model which can provide feedback from different perspectives, and we find that it augments the system's capability to follow instructions effectively. Experimental results indicate that DiffRO significantly improves the pronunciation accuracy of the TTS system, achieving state-of-the-art (SOTA) WER results on the seed-tts-eval benchmark. Moreover, with the integration of the MTR model, we demonstrate the ability to control emotional and quality attributes in a zero-shot manner.
Problem

Research questions and friction points this paper is trying to address.

Enhance TTS performance using Differentiable Reward Optimization
Improve pronunciation accuracy with neural codec token rewards
Control emotional and quality attributes via multi-task rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable Reward Optimization for TTS
Gumbel-Softmax enables differentiable rewards
Multi-task reward model enhances instruction following
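The combination of the innovations above—a differentiable reward over a relaxed token distribution, aggregated across multiple reward perspectives—can be illustrated with a toy gradient-ascent loop. Everything below is a hypothetical sketch: the three reward "heads", their per-token scores, and the weights are made up for illustration and are not the paper's MTR model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical per-token scores from three reward heads
# (e.g. pronunciation, emotion, quality) -- values are invented.
heads = {
    "pronunciation": np.array([1.0, 0.2, -0.5, 0.0]),
    "emotion":       np.array([0.1, 0.8,  0.3, -0.2]),
    "quality":       np.array([0.4, 0.1,  0.6,  0.2]),
}
weights = {"pronunciation": 1.0, "emotion": 0.5, "quality": 0.5}

# Combined per-token reward, then the expected reward under the
# relaxed token distribution p = softmax(logits).
r = sum(w * heads[k] for k, w in weights.items())

logits = np.zeros(4)
lr = 0.5
before = softmax(logits) @ r
for _ in range(50):
    p = softmax(logits)
    # Analytic gradient of E[r] = p.r w.r.t. logits:
    # dE/dlogit_j = p_j * (r_j - p.r)
    grad = p * (r - p @ r)
    logits += lr * grad  # gradient ascent on the expected reward
after = softmax(logits) @ r
```

Because the reward is a differentiable function of the logits, plain gradient ascent raises the expected reward and shifts probability mass toward the highest-reward token—no REINFORCE-style policy gradient estimator is needed, which is the simplification DiffRO claims.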