🤖 AI Summary
This work addresses a key challenge in zero-shot speech synthesis: existing models rely heavily on the speaking style of the reference audio, which limits controllability when references are scarce or stylistically mismatched. To overcome this, the authors propose a framework integrating decoupled classifier-free guidance (DCFG), style-specific LoRA modules, orthogonal LoRA fusion, and speaker timbre consistency optimization. This approach enables, for the first time, continuous, relative, and disentangled control over multiple stylistic attributes—such as pitch, energy, and diverse emotional expressions—while preserving speech intelligibility and speaker identity. The method remains robust even when the reference audio's style mismatches the target utterance, advancing fine-grained and reliable style control in zero-shot settings.
📝 Abstract
Zero-shot text-to-speech models can clone a speaker's timbre from a short reference audio, but they also strongly inherit the speaking style present in the reference. As a result, synthesizing speech with a desired style often requires carefully selecting reference audio, which is impractical when only limited or mismatched references are available. While recent controllable TTS methods attempt to address this issue, they typically rely on absolute style targets and discrete textual prompts, and therefore do not support continuous and reference-relative style control. We propose ReStyle-TTS, a framework that enables continuous and reference-relative style control in zero-shot TTS. Our key insight is that effective style control requires first reducing the model's implicit dependence on reference style before introducing explicit control mechanisms. To this end, we introduce Decoupled Classifier-Free Guidance (DCFG), which independently controls text and reference guidance, reducing reliance on reference style while preserving text fidelity. On top of this, we apply style-specific LoRAs together with Orthogonal LoRA Fusion to enable continuous and disentangled multi-attribute control, and introduce a Timbre Consistency Optimization module to mitigate timbre drift caused by weakened reference guidance. Experiments show that ReStyle-TTS enables user-friendly, continuous, and relative control over pitch, energy, and multiple emotions while maintaining intelligibility and speaker timbre, and performs robustly in challenging mismatched reference-target style scenarios.
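To make the DCFG idea concrete, here is a minimal sketch of decoupled classifier-free guidance under the common multi-condition CFG decomposition: the denoiser is queried unconditionally, with text only, and with text plus reference, and the two differences are scaled independently. The exact decomposition in ReStyle-TTS may differ; `model`, `w_text`, and `w_ref` are illustrative names, and `toy_model` is a stand-in, not a real TTS denoiser.

```python
import numpy as np

def dcfg(model, x_t, text, ref, w_text=3.0, w_ref=1.0):
    """Decoupled CFG sketch: combine three denoiser calls with separate scales.

    w_text controls adherence to the text (intelligibility);
    w_ref controls how strongly the reference style is imposed.
    Lowering w_ref weakens reliance on reference style without
    sacrificing text fidelity, since w_text is set independently.
    """
    eps_uncond = model(x_t, text=None, ref=None)  # fully unconditional
    eps_text = model(x_t, text=text, ref=None)    # text condition only
    eps_full = model(x_t, text=text, ref=ref)     # text + reference
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_ref * (eps_full - eps_text))

# Toy "denoiser": adds a fixed offset per active condition, just to
# show how the two guidance scales act on separate difference terms.
def toy_model(x_t, text=None, ref=None):
    out = np.zeros_like(x_t)
    if text is not None:
        out += 1.0
    if ref is not None:
        out += 0.5
    return out

x = np.zeros(4)
eps = dcfg(toy_model, x, text="hello", ref="ref.wav", w_text=2.0, w_ref=0.5)
# 0 + 2.0*(1.0-0) + 0.5*(1.5-1.0) = 2.25 per element
```

With `w_ref = 1` and `w_text = w_ref` this collapses to ordinary single-scale CFG; the decoupling is what lets the reference term be down-weighted on its own.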