🤖 AI Summary
Existing watermarking methods for voice cloning suffer severe watermark loss in zero-shot cloning scenarios, achieving only ~50% detection accuracy (near chance level). To address this, we propose the first anti-cloning watermarking framework designed specifically for zero-shot voice cloning (VC). Our method embeds watermarks into speaker-specific latent variables, so the watermark transfers through the cloning process and is retained even when the cloning model was never trained on watermarked audio. The framework has three core components: (i) speaker-specific latent-space modeling as the watermark carrier, (ii) VC-simulated data augmentation that models cloning-induced distortions, and (iii) voice activity detection (VAD)-aware loss design that preserves watermark integrity in speech-active segments. Evaluated across multiple state-of-the-art zero-shot VC models, our approach achieves over 95% watermark detection accuracy, significantly outperforming prior methods, and is the first solution to enable robust watermark embedding and detection in zero-shot settings.
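The VC-simulated augmentation idea above can be sketched as follows. This is a hypothetical, minimal stand-in (the paper does not publish this exact pipeline): it applies distortions of the kind a zero-shot cloning model plausibly introduces to a prompt, such as loudness changes, bandwidth loss from resampling, and additive noise, so a watermark detector trained on these samples sees cloning-like degradations. The function name and distortion choices are illustrative assumptions.

```python
import numpy as np

def simulate_cloning_distortions(wave, rng=None):
    """Illustrative augmentation sketch: apply distortions approximating
    what a zero-shot cloning pipeline does to a watermarked prompt.
    All specific distortions and ranges here are assumptions, not the
    paper's published recipe."""
    rng = rng or np.random.default_rng(0)
    out = wave.astype(np.float64)

    # 1) Random gain, mimicking loudness normalization in the cloning model.
    out *= rng.uniform(0.6, 1.2)

    # 2) Down-up resampling via linear interpolation (codec/bandwidth loss).
    factor = int(rng.choice([2, 4]))
    low = out[::factor]
    x_low = np.arange(low.size) * factor
    out = np.interp(np.arange(out.size), x_low, low)

    # 3) Additive Gaussian noise at a random signal-to-noise ratio.
    snr_db = rng.uniform(20.0, 40.0)
    noise = rng.standard_normal(out.size)
    scale = np.sqrt(np.mean(out**2) / (np.mean(noise**2) * 10 ** (snr_db / 10)))
    out = out + scale * noise
    return out.astype(np.float32)
```

Training the detector on such augmented audio, rather than on clean watermarked audio alone, is what lets the watermark survive the untrained (zero-shot) cloning path.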
📝 Abstract
Voice cloning (VC)-resistant watermarking is an emerging technique for tracing and preventing unauthorized cloning. Existing methods effectively trace traditional VC models, which are trained on watermarked audio, but fail in zero-shot VC scenarios, where models synthesize audio from an audio prompt without any training. To address this, we propose VoiceMark, the first zero-shot VC-resistant watermarking method. It leverages speaker-specific latents as the watermark carrier, allowing the watermark to transfer through the zero-shot VC process into the synthesized audio. Additionally, we introduce VC-simulated augmentations and a voice activity detection (VAD)-based loss to enhance robustness against distortions. Experiments on multiple zero-shot VC models demonstrate that VoiceMark achieves over 95% accuracy in watermark detection after zero-shot VC synthesis, significantly outperforming existing methods, which reach only around 50%. See our code and demos at: https://huggingface.co/spaces/haiyunli/VoiceMark
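A VAD-based loss of the kind the abstract mentions can be sketched as a weighting scheme: watermark decoding errors on speech-active frames are penalized more heavily than errors on silence, since a cloning model copies the speech segments. The energy-threshold VAD, function names, and weight values below are illustrative assumptions, not VoiceMark's actual formulation.

```python
import numpy as np

def vad_mask(frame_energy, threshold=0.01):
    """Crude energy-based VAD: 1.0 for frames whose energy exceeds a
    threshold (speech-active), 0.0 otherwise. A stand-in for a real VAD."""
    return (frame_energy > threshold).astype(np.float32)

def vad_weighted_loss(decoded_bits, target_bits, mask,
                      speech_weight=1.0, silence_weight=0.1):
    """Per-frame squared error on decoded watermark bits, up-weighted on
    speech-active frames so the watermark is preserved where cloning
    actually transfers content.

    decoded_bits, target_bits: (frames, bits); mask: (frames,)."""
    per_frame = (decoded_bits - target_bits) ** 2
    weights = np.where(mask[:, None] > 0, speech_weight, silence_weight)
    return float(np.sum(weights * per_frame) / np.sum(weights * np.ones_like(per_frame)))
```

Under this weighting, a decoding error confined to silent frames contributes far less to the loss than the same error on speech frames, steering the embedder toward speech-active segments.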