🤖 AI Summary
In one-shot voice conversion, the absence of parallel data causes a dual input mismatch: both the content encoder and the speaker encoder see different input distributions during training than at inference. To address this, we propose Pseudo Conversion and Speaker Sampling, two mechanisms that close these train/inference domain gaps for the two encoders without any genuine paired data. Our approach leverages a pretrained voice conversion (VC) model and integrates three key components: information perturbation, intra-speaker cross-sentence sample replacement, and end-to-end waveform reconstruction. Experiments demonstrate that our Pseudo Conversion strategy significantly outperforms existing information perturbation methods, and that the resulting PseudoVC model surpasses publicly available state-of-the-art approaches across multiple benchmarks. Audio samples and implementation details are open-sourced to facilitate reproducibility and further research.
📝 Abstract
As parallel training data is scarce for one-shot voice conversion (VC) tasks, most VC systems are instead trained by waveform reconstruction. A typical one-shot VC system comprises a content encoder and a speaker encoder. However, two types of mismatches arise between training and inference: one for the inputs to the content encoder, and another for the inputs to the speaker encoder. To address these mismatches, we propose a novel VC training method called *PseudoVC* in this paper. First, we introduce an innovative information perturbation approach named *Pseudo Conversion* to tackle the first mismatch. This approach leverages pretrained VC models to convert the source utterance into a perturbed utterance, which is fed into the content encoder during training. Second, we propose an approach termed *Speaker Sampling* to resolve the second mismatch, which substitutes the input to the speaker encoder with another utterance from the same speaker during training. Experimental results demonstrate that our proposed *Pseudo Conversion* outperforms previous information perturbation methods, and the overall *PseudoVC* method surpasses publicly available VC models. Audio examples are available.
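The two substitutions described in the abstract can be sketched as a training-batch construction step. The sketch below is a minimal illustration, not the authors' implementation: `pseudo_convert` is a hypothetical stand-in for the pretrained VC model, and all data structures and function names are assumptions.

```python
import random

def pseudo_convert(source_utt, random_speaker):
    """Stand-in for a pretrained VC model (Pseudo Conversion): it would
    convert the source utterance into random_speaker's voice, perturbing
    speaker identity while preserving linguistic content. Here we only
    relabel the speaker field to keep the sketch runnable."""
    return {"content": source_utt["content"], "speaker": random_speaker}

def build_training_inputs(utterances_by_speaker, speakers):
    """Construct one training example with both substitutions:
    1) Pseudo Conversion: the content encoder receives a *converted*
       version of the source utterance, mimicking the domain it will
       see at inference (a foreign-speaker input).
    2) Speaker Sampling: the speaker encoder receives a *different*
       utterance from the same speaker, so it cannot leak content
       from the reconstruction target."""
    spk = random.choice(speakers)
    source, other = random.sample(utterances_by_speaker[spk], 2)
    perturb_spk = random.choice([s for s in speakers if s != spk])
    content_input = pseudo_convert(source, perturb_spk)  # Pseudo Conversion
    speaker_input = other                                # Speaker Sampling
    target = source                                      # reconstruct source
    return content_input, speaker_input, target

# Toy data: two speakers, three utterances each.
data = {
    "A": [{"content": f"a{i}", "speaker": "A"} for i in range(3)],
    "B": [{"content": f"b{i}", "speaker": "B"} for i in range(3)],
}
content_in, speaker_in, target = build_training_inputs(data, ["A", "B"])
```

Note how the invariants match the abstract: the content-encoder input keeps the target's content but carries a different speaker identity, while the speaker-encoder input shares the target's speaker but not its content.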