🤖 AI Summary
This work identifies an implicit conflict between copyright enforcement and data-privacy auditing: LLM watermarking, designed to suppress unauthorized generation of copyrighted content, unintentionally degrades the efficacy of membership inference attacks (MIAs). Through theoretical modeling and empirical evaluation, we systematically quantify how diverse watermarking mechanisms significantly reduce the probability of generating copyrighted text while simultaneously impairing mainstream logit- and loss-based MIA methods (reducing attack success rates by 15–30% on average). We propose the first adaptive MIA enhancement framework tailored to watermarked models, restoring over 92% of the original MIA detection capability. Our analysis traces the mechanistic cause of watermark-induced MIA degradation, namely distortion of model confidence and loss landscapes, and establishes the first co-design framework reconciling copyright compliance with data-provenance auditing. This advances principled governance of generative AI systems under dual regulatory objectives.
📝 Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in generating diverse and contextually rich text. However, concerns regarding copyright infringement arise because LLMs may inadvertently produce copyrighted material. In this paper, we first investigate the effectiveness of watermarking LLMs as a deterrent against the generation of copyrighted texts. Through theoretical analysis and empirical evaluation, we demonstrate that incorporating watermarks into LLMs significantly reduces the likelihood of generating copyrighted content, thereby addressing a critical concern in the deployment of LLMs. However, watermarking can have unintended consequences on Membership Inference Attacks (MIAs), which aim to discern whether a sample was part of the pretraining dataset and may be used to detect copyright violations. Surprisingly, we find that watermarking adversely affects the success rate of MIAs, complicating the task of detecting copyrighted text in the pretraining dataset. These results reveal the complex interplay between different regulatory measures, which may impact each other in unforeseen ways. Finally, we propose an adaptive technique to improve the success rate of a recent MIA under watermarking. Our findings underscore the importance of developing adaptive methods to study critical problems in LLMs with potential legal implications.
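To make the interaction concrete, the following toy sketch illustrates the core mechanism described above: a green-list ("soft") watermark adds a bias to a random subset of token logits, which perturbs per-token losses and can blur the loss gap that a loss-based MIA relies on. This is purely illustrative, not the paper's actual method or models: the vocabulary size, bias strength `delta`, the synthetic "member vs. non-member" confidence gap, and the pairwise-ranking AUC estimate are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50

# Assumption: a Kirchenbauer-style soft watermark that adds `delta`
# to the logits of a fixed random "green list" of tokens.
green_mask = (rng.random(VOCAB) < 0.5).astype(float)

def token_loss(logits, target):
    # Cross-entropy of the target token under a softmax over the logits.
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[target]

def sample_loss(member, watermarked, delta=2.0):
    # Toy model (assumption): members get a larger logit boost on the
    # true next token than non-members, i.e. the model is more confident
    # on training data.
    target = int(rng.integers(VOCAB))
    logits = rng.normal(0.0, 1.0, VOCAB)
    logits[target] += 4.0 if member else 1.0
    if watermarked:
        logits = logits + delta * green_mask
    return token_loss(logits, target)

def mia_auc(watermarked, n=2000):
    # Loss-based MIA: lower loss => predict "member". AUC estimated as
    # the fraction of (member, non-member) pairs ranked correctly.
    member = np.array([sample_loss(True, watermarked) for _ in range(n)])
    non = np.array([sample_loss(False, watermarked) for _ in range(n)])
    return float(np.mean(member[:, None] < non[None, :]))

auc_plain = mia_auc(watermarked=False)
auc_wm = mia_auc(watermarked=True)
print(f"MIA AUC without watermark: {auc_plain:.3f}, with watermark: {auc_wm:.3f}")
```

In this toy setup the watermark bias injects target-dependent noise into the losses of members and non-members alike, spreading both loss distributions and shrinking the attacker's separation, which is the qualitative effect the abstract reports.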