🤖 AI Summary
To address copyright protection for speech datasets, this paper proposes Cluster-guided Backdoor Watermarking (CBW), a black-box watermark verification method tailored to speaker verification models. CBW couples sample feature similarity with trigger clustering, enabling cluster-guided trigger assignment and feature-space-aligned watermark embedding. This design yields a statistically verifiable ownership authentication framework that remains robust against adaptive attacks, including pruning, fine-tuning, and knowledge distillation. Evaluated on mainstream speaker verification benchmarks, CBW achieves over 99% watermark detection accuracy while degrading primary-task performance by less than 0.5%, significantly outperforming existing approaches. To the best of our knowledge, this work establishes the first efficient, robust, and low-overhead black-box verification paradigm for auditing the intellectual property rights of speech datasets.
📝 Abstract
With the increasing adoption of deep learning in speaker verification, large-scale speech datasets have become valuable intellectual property. To audit and prevent unauthorized use of these released datasets, especially in commercial or open-source scenarios, we propose a novel dataset ownership verification method. Our approach introduces a clustering-based backdoor watermark (CBW), enabling dataset owners to determine, under a black-box setting, whether a suspicious third-party model has been trained on a protected dataset. The CBW method consists of two key stages: dataset watermarking and ownership verification. During watermarking, we implant multiple trigger patterns in the dataset so that similar samples (measured by their feature similarities) are assigned the same trigger while dissimilar samples receive different triggers. This ensures that any model trained on the watermarked dataset exhibits specific misclassification behaviors when exposed to trigger-embedded inputs. To verify dataset ownership, we design a hypothesis-test-based framework that statistically evaluates whether a suspicious model exhibits the expected backdoor behavior. Extensive experiments on benchmark datasets verify the effectiveness of our method and its robustness against potential adaptive attacks. Code for reproducing the main experiments is available at https://github.com/Radiant0726/CBW.
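The two stages described above can be sketched at a high level. This is a minimal illustrative sketch, not the paper's exact algorithm: the choice of K-means for feature-space clustering, the margin `tau`, the significance level `alpha`, and all function names are assumptions introduced here for clarity.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

def assign_triggers(features, n_triggers, seed=0):
    """Stage 1 (sketch): cluster samples in feature space so that
    similar samples share one trigger and dissimilar samples get
    different triggers. Returns a trigger index per sample.
    K-means is an illustrative choice, not the paper's method."""
    km = KMeans(n_clusters=n_triggers, n_init=10, random_state=seed)
    return km.fit_predict(features)

def verify_ownership(benign_scores, triggered_scores, tau=0.05, alpha=0.05):
    """Stage 2 (sketch): hypothesis-test-based verification.
    Tests whether trigger-embedded inputs raise the model's
    misclassification score by more than a margin tau, using a
    one-sided paired t-test at significance level alpha."""
    diffs = np.asarray(triggered_scores) - np.asarray(benign_scores) - tau
    result = stats.ttest_1samp(diffs, 0.0, alternative="greater")
    return result.pvalue < alpha

# Toy demo with synthetic embeddings: 200 samples, 4 triggers.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 16))
trigger_ids = assign_triggers(feats, n_triggers=4)
print(sorted(set(trigger_ids)))  # four distinct trigger indices
```

In this sketch, a watermarked model would be verified when `verify_ownership` rejects the null hypothesis, i.e., triggered inputs consistently score higher on the backdoor target than benign ones; an independently trained model should yield no significant difference.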