🤖 AI Summary
This work addresses the challenge of zero-shot cross-lingual singing voice conversion: transferring singing voices across languages without parallel target-language data or speaker-specific fine-tuning. The proposed method, FreeSVC, extends the VITS architecture with three key innovations: (1) learnable multilingual language embeddings that explicitly decouple language identity from timbral characteristics; (2) Speaker-invariant Clustering (SPIN) for robust prosodic and phonemic content representation; and (3) the ECAPA2 speaker encoder, an ECAPA-TDNN variant suited to fine-grained timbre modeling. Evaluated on a multilingual singing dataset, the method achieves significant improvements in naturalness, timbre fidelity, and phonetic accuracy over prior approaches, and is the first to realize truly zero-shot cross-lingual singing voice conversion to an arbitrary target singer. Code and pretrained models are publicly available.
📝 Abstract
This work presents FreeSVC, a multilingual singing voice conversion approach that leverages an enhanced VITS model with Speaker-invariant Clustering (SPIN) for better content representation and the state-of-the-art (SOTA) speaker encoder ECAPA2. FreeSVC incorporates trainable language embeddings to handle multiple languages and employs an advanced speaker encoder to disentangle speaker characteristics from linguistic content. Designed for zero-shot learning, FreeSVC enables cross-lingual singing voice conversion without extensive language-specific training. We demonstrate that a multilingual content extractor is crucial for optimal cross-language conversion. Our source code and models are publicly available.
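To make the conditioning scheme concrete, here is a minimal NumPy sketch of how the three disentangled streams described above (per-frame content features from a speaker-invariant extractor such as SPIN, a trainable language embedding, and an utterance-level speaker embedding such as ECAPA2 produces) could be combined into decoder input. All dimensions, table sizes, and the simple concatenation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical dimensions -- illustrative only, not taken from the paper.
D_CONTENT, D_LANG, D_SPK, T = 256, 32, 192, 100  # T = number of frames

rng = np.random.default_rng(0)

# Per-frame content features, standing in for a SPIN-style
# speaker-invariant content extractor's output.
content = rng.standard_normal((T, D_CONTENT))

# Trainable language embedding table (10 hypothetical languages),
# looked up by language id and broadcast over time.
lang_table = rng.standard_normal((10, D_LANG))
lang = np.broadcast_to(lang_table[3], (T, D_LANG))

# Utterance-level speaker embedding (ECAPA2-style), broadcast over time.
spk = np.broadcast_to(rng.standard_normal(D_SPK), (T, D_SPK))

# Condition the decoder on all three streams; because language and speaker
# are separate inputs, either can be swapped at conversion time.
decoder_input = np.concatenate([content, lang, spk], axis=-1)
print(decoder_input.shape)  # (100, 480)
```

Keeping the language and speaker streams separate is what lets a zero-shot system change the target singer (swap `spk`) without disturbing language identity, or vice versa.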