🤖 AI Summary
This work addresses copyright compliance of generative-model outputs: under what conditions can such outputs be guaranteed not to infringe copyrights in the training data? Method: the paper proposes the "blameless copy protection" framework and its core instantiation, "clean-room copy protection," which lets a user control their risk of infringing by behaving in a way that is unlikely to copy in a counterfactual clean-room setting. It first shows that the existing near access-freeness (NAF) criterion is insufficient on its own: NAF models can still enable verbatim copying, a failure mode the authors dub being "tainted." Contribution/Results: using a formal protection definition, a counterfactual clean-room analysis, and a "golden" (copyright-deduplicated) dataset assumption, the paper proves that differentially private (DP) training implies clean-room copy protection on golden datasets. This gives generative AI a theoretically grounded and legally firmer pathway toward copyright compliance.
📝 Abstract
Are there any conditions under which a generative model's outputs are guaranteed not to infringe the copyrights of its training data? This is the question of "provable copyright protection" first posed by Vyas, Kakade, and Barak (ICML 2023). They define near access-freeness (NAF) and propose it as sufficient for protection. This paper revisits the question and establishes new foundations for provable copyright protection -- foundations that are firmer both technically and legally. First, we show that NAF alone does not prevent infringement. In fact, NAF models can enable verbatim copying, a blatant failure of copy protection that we dub being tainted. Then, we introduce our blameless copy protection framework for defining meaningful guarantees, and instantiate it with clean-room copy protection. Clean-room copy protection allows a user to control their risk of copying by behaving in a way that is unlikely to copy in a counterfactual clean-room setting. Finally, we formalize a common intuition about differential privacy and copyright by proving that DP implies clean-room copy protection when the dataset is golden, a copyright deduplication requirement.
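For readers unfamiliar with the two formal notions the abstract invokes, the following is a background sketch (not part of the abstract itself): the near access-freeness condition as defined by Vyas, Kakade, and Barak, and standard (ε, δ)-differential privacy. The symbols Δ, safe_C, and k_x follow the NAF paper's conventions; the precise statements should be checked against the papers.

```latex
% Near access-freeness (Vyas, Kakade, Barak, ICML 2023), sketched:
% a model $p$ is $k_x$-NAF with respect to a copyrighted work $C$,
% a divergence $\Delta$, and a "safe" model $\mathrm{safe}_C$ trained
% without access to $C$, if for every prompt $x$:
\[
  \Delta\bigl(\, p(\cdot \mid x) \;\big\|\; \mathrm{safe}_C(\cdot \mid x) \,\bigr) \;\le\; k_x .
\]

% Standard $(\varepsilon, \delta)$-differential privacy: a training
% mechanism $M$ is $(\varepsilon, \delta)$-DP if for all neighboring
% datasets $D, D'$ and all measurable output sets $S$:
\[
  \Pr[\, M(D) \in S \,] \;\le\; e^{\varepsilon} \Pr[\, M(D') \in S \,] + \delta .
\]
```

The paper's final result connects the second definition to the first setting: DP training, applied to a golden (copyright-deduplicated) dataset, yields clean-room copy protection, a guarantee the paper argues NAF alone does not provide.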