TriniMark: A Robust Generative Speech Watermarking Method for Trinity-Level Attribution

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the escalating deepfake risks and lack of copyright traceability in diffusion-based speech synthesis, this paper proposes the first end-to-end robust watermarking framework tailored for speech diffusion models, enabling attributable authentication across generated speech, the generative model itself, and its user. We introduce a novel ternary attribution watermarking paradigm, design a lightweight time-domain encoder-decoder, and propose a waveform-guided fine-tuning strategy for diffusion models—significantly enhancing watermark transferability and extractability under model distillation and surrogate training. Under diverse adversarial attacks—including resampling, compression, ASR/TTS transcription, and model stealing—the framework achieves >96% bit accuracy, outperforming state-of-the-art methods. Notably, it is the first to enable end-to-end watermark penetration and traceability throughout the entire surrogate model training process.
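The >96% figure above is bit accuracy: the fraction of payload bits recovered correctly after an attack. A minimal illustration of the metric, using a hypothetical 64-bit payload with two simulated bit flips (e.g., from lossy compression):

```python
import numpy as np

rng = np.random.default_rng(0)
payload = rng.integers(0, 2, size=64)       # hypothetical 64-bit watermark payload
recovered = payload.copy()
recovered[[5, 40]] ^= 1                     # simulate 2 bit flips caused by an attack
bit_acc = float((payload == recovered).mean())
print(bit_acc)                              # 0.96875, i.e. above the 96% threshold
```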

📝 Abstract
The emergence of diffusion models has facilitated the generation of speech with reinforced fidelity and naturalness. While deepfake detection technologies have manifested the ability to identify AI-generated content, their efficacy decreases as generative models become increasingly sophisticated. Furthermore, current research in the field has not adequately addressed the necessity for robust watermarking to safeguard the intellectual property rights associated with synthetic speech and generative models. To remedy this deficiency, we propose a robust generative speech watermarking method (TriniMark) for authenticating the generated content and safeguarding copyright by enabling the traceability of the diffusion model. We first design a structure-lightweight watermark encoder that embeds watermarks into the time-domain features of speech and reconstructs the waveform directly. A temporal-aware gated convolutional network is meticulously designed in the watermark decoder for bit-wise watermark recovery. Subsequently, a waveform-guided fine-tuning strategy is proposed for the diffusion model, which leverages the transferability of watermarks and enables the diffusion model to incorporate watermark knowledge effectively. When an attacker trains a surrogate model using the outputs of the target model, the embedded watermark can still be learned by the surrogate model and correctly extracted. Comparative experiments with state-of-the-art methods demonstrate the superior robustness of our method, particularly in countering compound attacks.
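For intuition on time-domain embedding and bit-wise extraction: the paper's encoder and decoder are learned networks, but a classical spread-spectrum sketch conveys the idea. Each payload bit is spread over a keyed pseudo-random carrier added to the waveform at low gain, and recovered by correlating against the same carriers (the seed, gain, and carrier scheme below are illustrative assumptions, not the paper's design):

```python
import numpy as np

def embed_watermark(wave, bits, alpha=1e-3):
    """Additive time-domain embedding: spread each bit over a keyed
    pseudo-random carrier and add it to the waveform at low gain."""
    rng = np.random.default_rng(42)                  # shared secret key
    carriers = rng.standard_normal((len(bits), len(wave)))
    signs = 2 * np.asarray(bits) - 1                 # map {0,1} -> {-1,+1}
    return wave + alpha * signs @ carriers

def extract_watermark(wave, n_bits):
    """Bit-wise recovery by correlating against the keyed carriers."""
    rng = np.random.default_rng(42)                  # regenerate with same key
    carriers = rng.standard_normal((n_bits, len(wave)))
    return (carriers @ wave > 0).astype(int)         # sign of correlation = bit
```

A learned encoder-decoder pair replaces the fixed carriers with features optimized for imperceptibility and robustness, which is what lets the watermark survive regeneration by a fine-tuned or surrogate diffusion model.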
Problem

Research questions and friction points this paper is trying to address.

Robust watermarking for synthetic speech copyright protection
Traceability of diffusion models in generated content
Countering compound attacks on speech watermarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight watermark encoder embeds watermarks into time-domain speech features
Temporal-aware gated CNN decoder recovers watermarks
Waveform-guided fine-tuning transfers watermark knowledge
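The decoder's "temporal-aware gated convolutional network" is not specified on this page; one plausible building block is a gated 1-D convolution in the GLU style, where a tanh feature path is modulated by a sigmoid gate and causal dilation supplies temporal context. The kernels and sizes below are illustrative stand-ins, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_conv1d(x, w_feat, w_gate, dilation=1):
    """Causal dilated 1-D convolution with multiplicative gating:
    out[t] = tanh(conv(x, w_feat)[t]) * sigmoid(conv(x, w_gate)[t])."""
    k = len(w_feat)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), x])       # left-pad to keep output causal
    out = np.empty(len(x))
    for t in range(len(x)):
        window = xp[t : t + pad + 1 : dilation]   # k dilated taps ending at time t
        out[t] = np.tanh(window @ w_feat) * sigmoid(window @ w_gate)
    return out
```

Stacking such layers with growing dilation (1, 2, 4, ...) widens the receptive field exponentially, which is presumably what "temporal-aware" refers to.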
Yue Li
College of Computer Science and Technology, National Huaqiao University, Xiamen 361021, China, and also with the Xiamen Key Laboratory of Data Security and Blockchain Technology, Xiamen 361021, China
Weizhi Liu
East China Normal University
AIGC security, Generative watermarking
Dongdong Lin