🤖 AI Summary
Diffusion models (DMs) are often trained on copyrighted images without authorization, yet existing methods lack reliable, scalable mechanisms for dataset-level copyright attribution and provenance tracing.
Method: We propose the first dataset-level copyright identification framework for DMs, departing from conventional single-sample membership inference attacks (MIAs). Our approach integrates multiple heterogeneous signals—enhanced MIA outputs, handcrafted statistical features of image collections, a lightweight scoring model, and Bootstrap-based significance testing—to construct a statistically rigorous, verifiable inference paradigm.
Contribution/Results: Using only 70 publicly available images, our method reliably determines whether a target DM was trained on a specific copyrighted image dataset at >99% confidence. It achieves high accuracy, strong interpretability, and low computational overhead on state-of-the-art large-scale DMs—overcoming critical reliability limitations of prior MIAs in practical deployment scenarios.
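The pipeline sketched in the summary (per-image MIA signals → handcrafted dataset-level statistics → lightweight scoring model) can be illustrated as follows. This is a minimal sketch under stated assumptions, not the paper's implementation: the specific statistics (`mean`, `std`, 0.9-quantile) and the random linear weights are illustrative stand-ins for CDI's actual handcrafted features and trained scorer.

```python
import numpy as np

def dataset_features(mia_scores):
    """Collapse per-image MIA scores of shape (n_images, n_attacks)
    into a single dataset-level feature vector.

    The statistics below (mean, std, 0.9-quantile per attack) are
    illustrative stand-ins for the paper's handcrafted features.
    """
    s = np.asarray(mia_scores, dtype=float)
    feats = []
    for col in s.T:  # one block of summary statistics per MIA signal
        feats += [col.mean(), col.std(), np.quantile(col, 0.9)]
    return np.array(feats)

# Hypothetical setup: 70 images scored by 3 different MIAs.
rng = np.random.default_rng(0)
per_image_scores = rng.normal(size=(70, 3))

# A lightweight linear scoring model. In CDI the scorer is learned
# (e.g., on models with known training sets); random weights here
# are purely for illustration.
w = rng.normal(size=9)  # 3 statistics per MIA signal -> 9 features
dataset_score = float(dataset_features(per_image_scores) @ w)
```

The key design point this sketch reflects is that the decision is made from an aggregate over the whole collection rather than from any single image's (unreliable) membership signal.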
📝 Abstract
Diffusion Models (DMs) benefit from large and diverse datasets for their training. Because this data is often scraped from the Internet without permission from the data owners, its use raises concerns about copyright and intellectual property protections. While (illicit) use of data is easily detected for training samples perfectly re-created by a DM at inference time, it is much harder for data owners to verify whether their data was used for training when the suspect DM's outputs are not close replicas. Conceptually, membership inference attacks (MIAs), which detect whether a given data point was used during training, present themselves as a suitable tool to address this challenge. However, we demonstrate that existing MIAs are not strong enough to reliably determine the membership of individual images in large, state-of-the-art DMs. To overcome this limitation, we propose CDI, a framework for data owners to identify whether their dataset was used to train a given DM. CDI relies on dataset inference techniques, i.e., instead of using the membership signal from a single data point, CDI leverages the fact that most data owners, such as providers of stock photography, visual media companies, or even individual artists, own datasets with multiple publicly exposed data points which might all be included in the training of a given DM. By selectively aggregating signals from existing MIAs, extracting new handcrafted features from these datasets, feeding them to a scoring model, and applying rigorous statistical testing, CDI allows data owners with as few as 70 data points to identify with a confidence of more than 99% whether their data was used to train a given DM. CDI thereby represents a valuable tool for data owners to substantiate claims of illegitimate use of their copyrighted data.
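The statistical-testing step described above can be sketched with a resampling-based significance test. The sketch below is one simple instance of such a test (a permutation variant), not the paper's exact procedure: the test statistic (difference of mean membership scores), the control set of known non-members, and the synthetic score distributions are all assumptions made for illustration. The ">99% confidence" claim corresponds to rejecting the null hypothesis at significance level 0.01.

```python
import numpy as np

rng = np.random.default_rng(0)

def resampling_membership_test(member_scores, control_scores,
                               n_resamples=10_000, alpha=0.01):
    """One-sided resampling test: are the suspect dataset's membership
    scores significantly higher than those of a control set?

    member_scores: per-image scores from an MIA/scoring model on the
        data owner's images (higher = more member-like).
    control_scores: scores on images assumed NOT to be in the training
        data (e.g., published after the model's training cutoff).
    Returns (p_value, reject_null); reject_null=True suggests the
    model was trained on the owner's data at confidence 1 - alpha.
    """
    member_scores = np.asarray(member_scores, dtype=float)
    control_scores = np.asarray(control_scores, dtype=float)
    observed = member_scores.mean() - control_scores.mean()

    # Null hypothesis: member and control scores come from the same
    # distribution, so the labels are exchangeable.
    pooled = np.concatenate([member_scores, control_scores])
    n = len(member_scores)
    diffs = np.empty(n_resamples)
    for b in range(n_resamples):
        shuffled = rng.permutation(pooled)
        diffs[b] = shuffled[:n].mean() - shuffled[n:].mean()

    p_value = (1 + np.sum(diffs >= observed)) / (1 + n_resamples)
    return p_value, p_value < alpha

# Toy demo with synthetic scores: 70 "member" images whose scores are
# shifted upward relative to 70 control images.
members = rng.normal(0.7, 0.2, size=70)
controls = rng.normal(0.5, 0.2, size=70)
p_value, reject = resampling_membership_test(members, controls)
```

With 70 samples per side, even a modest but consistent upward shift in per-image scores yields a very small p-value, which is the intuition behind aggregating many weak per-image signals into one strong dataset-level decision.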