🤖 AI Summary
Existing voice cloning methods rely heavily on labeled data and struggle to disentangle timbre, speaking style, and linguistic content in zero-shot settings, limiting controllable speech generation. This paper proposes the first fully self-supervised, progressive disentanglement framework: it leverages HuBERT features and a VQ-VAE information bottleneck to obtain separable representations; models joint content-style representations with an autoregressive Transformer; and reconstructs acoustic representations with a flow-matching Transformer. The entire pipeline is trained without any annotations on 60K hours of audiobook data. To our knowledge, this is the first method to achieve independent zero-shot control over timbre and speaking style. It matches or surpasses state-of-the-art performance in accent and emotion conversion, while unifying zero-shot voice conversion and end-to-end text-to-speech within a single architecture.
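The two-stage pipeline in the summary can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: the class names (`ContentStyleLM`, `FlowMatchingDecoder`), the stub behavior, and all shapes are hypothetical stand-ins for the autoregressive and flow-matching Transformers described above.

```python
import numpy as np

class ContentStyleLM:
    """Stand-in for the autoregressive transformer (stage 1): maps content
    tokens plus a style prompt to content-style tokens. A real model would
    decode token-by-token; here we simply prepend the prompt."""
    def generate(self, content_tokens, style_prompt):
        return np.concatenate([style_prompt, content_tokens])

class FlowMatchingDecoder:
    """Stand-in for the flow-matching transformer (stage 2): maps
    content-style tokens plus a timbre prompt to acoustic features
    (e.g., mel spectrogram frames). Here it just emits zero frames."""
    def __init__(self, n_mels=80):
        self.n_mels = n_mels

    def decode(self, content_style_tokens, timbre_prompt):
        num_frames = len(content_style_tokens)
        return np.zeros((num_frames, self.n_mels))

def vevo_infer(content_tokens, style_prompt, timbre_prompt):
    """Sketch of two-stage inference: style is injected in stage 1,
    timbre in stage 2, so the two attributes are controlled independently."""
    cs_tokens = ContentStyleLM().generate(content_tokens, style_prompt)
    return FlowMatchingDecoder().decode(cs_tokens, timbre_prompt)

mel = vevo_infer(np.arange(50), np.arange(10), timbre_prompt=None)
print(mel.shape)  # (60, 80)
```

The point of the structure is that the style reference only ever conditions stage 1 and the timbre reference only ever conditions stage 2, which is what enables the independent control the summary describes.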
📝 Abstract
The imitation of voice, targeting specific speech attributes such as timbre and speaking style, is crucial in speech generation. However, existing methods rely heavily on annotated data and struggle to effectively disentangle timbre and style, making controllable generation difficult, especially in zero-shot scenarios. To address these issues, we propose Vevo, a versatile zero-shot voice imitation framework with controllable timbre and style. Vevo operates in two core stages: (1) Content-Style Modeling: given either text or the content tokens of speech as input, we use an autoregressive transformer, prompted by a style reference, to generate content-style tokens; (2) Acoustic Modeling: given the content-style tokens as input, we employ a flow-matching transformer, prompted by a timbre reference, to produce acoustic representations. To obtain the content and content-style tokens of speech, we design a fully self-supervised approach that progressively decouples the timbre, style, and linguistic content of speech. Specifically, we adopt a VQ-VAE as the tokenizer for the continuous hidden features of HuBERT, treating the vocabulary size of the VQ-VAE codebook as an information bottleneck and adjusting it carefully to obtain disentangled speech representations. Trained solely with self-supervision on 60K hours of audiobook speech data, without any fine-tuning on style-specific corpora, Vevo matches or surpasses existing methods in accent and emotion conversion tasks. Additionally, Vevo's effectiveness in zero-shot voice conversion and text-to-speech tasks further demonstrates its strong generalization and versatility. Audio samples are available at https://versavoice.github.io.