VoiceMark: Zero-Shot Voice Cloning-Resistant Watermarking Approach Leveraging Speaker-Specific Latents

📅 2025-05-27
🤖 AI Summary
Existing watermarking methods for voice cloning suffer from severe watermark loss in zero-shot cloning scenarios, achieving only ~50% detection accuracy. To address this, we propose the first anti-cloning watermarking framework specifically designed for zero-shot voice cloning. Our method embeds watermarks into speaker-specific latent variables—ensuring watermark transferability and robust retention even during untrained cloning. We introduce two key innovations: (i) voice conversion (VC)-guided simulation-based data augmentation to model cloning-induced distortions, and (ii) voice activity detection (VAD)-aware loss optimization to preserve watermark integrity during speech-active segments. The framework comprises three core components: speaker-specific latent space modeling, cloning-process simulation for robust data augmentation, and VAD-guided loss design. Evaluated across multiple state-of-the-art zero-shot voice cloning models, our approach achieves >95% watermark detection accuracy—significantly outperforming prior methods—and establishes the first solution enabling highly robust and accurate watermark embedding and detection in zero-shot settings.
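The VAD-aware loss described above can be sketched as a per-frame weighting of the watermark decoding objective, so that speech-active frames (the material a zero-shot cloning model actually copies from the prompt) dominate training. The following is a minimal hypothetical sketch, not the paper's implementation; all function and parameter names are assumptions.

```python
# Hypothetical sketch (not the paper's implementation) of a VAD-weighted
# watermark decoding loss: frames where speech is active get a higher weight,
# so the watermark is preserved in the segments a cloning model transfers.
import torch
import torch.nn.functional as F

def vad_weighted_watermark_loss(decoder_logits, watermark_bits, vad_mask,
                                speech_weight=1.0, silence_weight=0.1):
    """decoder_logits: (batch, frames, n_bits) logits from a watermark decoder.
    watermark_bits:   (batch, n_bits) embedded bits in {0, 1}.
    vad_mask:         (batch, frames) 1.0 on speech-active frames, else 0.0."""
    target = watermark_bits.unsqueeze(1).expand_as(decoder_logits).float()
    per_frame = F.binary_cross_entropy_with_logits(
        decoder_logits, target, reduction="none").mean(dim=-1)  # (batch, frames)
    weights = silence_weight + (speech_weight - silence_weight) * vad_mask
    return (weights * per_frame).sum() / weights.sum()
```

Setting `silence_weight` above zero keeps a small gradient on non-speech frames rather than ignoring them entirely; the exact weighting scheme used by VoiceMark is not specified in this summary.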

📝 Abstract
Voice cloning (VC)-resistant watermarking is an emerging technique for tracing and preventing unauthorized cloning. Existing methods effectively trace traditional VC models by training them on watermarked audio but fail in zero-shot VC scenarios, where models synthesize audio from an audio prompt without training. To address this, we propose VoiceMark, the first zero-shot VC-resistant watermarking method that leverages speaker-specific latents as the watermark carrier, allowing the watermark to transfer through the zero-shot VC process into the synthesized audio. Additionally, we introduce VC-simulated augmentations and VAD-based loss to enhance robustness against distortions. Experiments on multiple zero-shot VC models demonstrate that VoiceMark achieves over 95% accuracy in watermark detection after zero-shot VC synthesis, significantly outperforming existing methods, which only reach around 50%. See our code and demos at: https://huggingface.co/spaces/haiyunli/VoiceMark
Problem

Research questions and friction points this paper is trying to address.

Prevents unauthorized voice cloning with watermarking
Addresses zero-shot VC scenarios lacking training data
Ensures watermark transfer to synthesized audio
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages speaker-specific latents as watermark carrier
Introduces VC-simulated augmentations for robustness
Uses VAD-based loss to enhance distortion resistance
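The VC-simulated augmentation idea can be illustrated with a toy distortion chain applied to watermarked audio during training: random gain, vocoder-like noise, and crude band-limiting stand in for the degradations a cloning pipeline introduces. This is an illustrative assumption; the paper's actual distortion set is not reproduced here.

```python
# Hypothetical sketch of VC-simulated augmentation (illustrative only):
# perturb a watermarked waveform with distortions loosely resembling what a
# zero-shot voice-cloning pipeline does to its audio prompt.
import numpy as np

def vc_simulated_augment(wav, rng=None):
    """wav: 1-D float32 waveform; returns a distorted copy of the same length."""
    rng = rng or np.random.default_rng()
    out = wav.astype(np.float64).copy()
    out *= rng.uniform(0.6, 1.4)                        # random gain
    out += rng.normal(0.0, 0.005, size=out.shape)       # vocoder-like noise
    factor = int(rng.choice([2, 4]))                    # crude band-limiting via
    out = np.repeat(out[::factor], factor)[:len(wav)]   # down/up-sampling
    return out.astype(np.float32)
```

Training the watermark encoder/decoder pair on such distorted copies is what makes detection robust after an untrained (zero-shot) cloning step.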
Haiyun Li
Shenzhen International Graduate School, Tsinghua University, China; Pengcheng Laboratory, China
Zhiyong Wu
Shenzhen International Graduate School, Tsinghua University, China; Pengcheng Laboratory, China
Xiaofeng Xie
Independent Researcher, China
Jingran Xie
Shenzhen International Graduate School, Tsinghua University, China
Yaoxun Xu
Tsinghua University
Hanyang Peng
Peng Cheng Laboratory
Deep Learning, Optimization