AI Summary
In non-parallel voice conversion (VC), the absence of ground-truth target speech causes a train-inference mismatch, inaccurate pitch, and insufficient speaker style transfer. To address these issues, this paper proposes CycleFlow, a novel method that integrates cycle consistency into the conditional flow matching (CFM) framework for speaker timbre adaptation on non-parallel data. Its core design is a Dual-CFM architecture: VoiceCFM generates the converted speech, while PitchCFM independently handles pitch adaptation, disentangling timbre from pitch. Trained solely on non-parallel data, CycleFlow jointly enforces cycle consistency and leverages this two-branch generation mechanism. Experiments demonstrate substantial improvements in speaker similarity and speech naturalness, effectively mitigating post-conversion hoarseness, with CycleFlow outperforming state-of-the-art non-parallel VC methods in both objective metrics (e.g., speaker cosine similarity, F0 RMSE) and subjective MOS scores.
Abstract
Voice Conversion (VC) aims to convert the style of a source speaker, such as timbre and pitch, to the style of any target speaker while preserving the linguistic content. However, the ground truth of the converted speech does not exist in a non-parallel VC scenario, which induces the train-inference mismatch problem. Moreover, existing methods still suffer from inaccurate pitch and low speaker adaptation quality, since there is a significant disparity in pitch between the source and target speaker style domains. As a result, these models tend to generate speech with hoarseness, posing challenges to achieving high-quality voice conversion. In this study, we propose CycleFlow, a novel VC approach that leverages cycle consistency in conditional flow matching (CFM) for speaker timbre adaptation training on non-parallel data. Furthermore, we design a Dual-CFM based on VoiceCFM and PitchCFM to generate speech and improve speaker pitch adaptation quality. Experiments show that our method significantly improves speaker similarity, generating natural and higher-quality speech.
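To make the conditional flow matching objective mentioned above concrete, here is a minimal NumPy sketch of the standard CFM (rectified-flow) training targets: a point is interpolated between a prior sample and the target features, and the model is regressed toward the constant conditional velocity. This is an illustrative generic CFM recipe, not the paper's actual VoiceCFM/PitchCFM implementation; all function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_training_targets(x0, x1, t):
    """Conditional flow matching (rectified-flow form):
    interpolate x_t between prior sample x0 and data x1,
    with the constant velocity x1 - x0 as regression target."""
    t = t.reshape(-1, 1)            # broadcast time over feature dims
    x_t = (1.0 - t) * x0 + t * x1   # point on the conditional probability path
    v_target = x1 - x0              # conditional vector field u_t(x_t | x1)
    return x_t, v_target

def cfm_loss(v_pred, v_target):
    """Mean-squared flow-matching loss."""
    return float(np.mean((v_pred - v_target) ** 2))

# Toy batch: 4 "frames" of 8-dimensional acoustic features.
x0 = rng.standard_normal((4, 8))    # sample from the Gaussian prior
x1 = rng.standard_normal((4, 8))    # target acoustic features
t = rng.uniform(size=4)             # random time steps in [0, 1]
x_t, v_target = cfm_training_targets(x0, x1, t)
print(cfm_loss(v_target, v_target)) # a perfect velocity predictor scores 0.0
```

In CycleFlow's non-parallel setting, a cycle-consistency term would additionally compare source speech converted to the target speaker and back against the original source, since no parallel ground truth is available for direct supervision.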