On the Evidentiary Limits of Membership Inference for Copyright Auditing

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the reliability of membership inference attacks (MIAs) as admissible evidence in adversarial copyright auditing. To model defenses in which a model developer applies semantics-preserving yet structure-perturbing transformations to training data, we formalize, for the first time, a tripartite communication protocol among a judge, a prosecutor, and an accused developer that simulates real-world copyright disputes. We further propose SAGE, a structure-aware rewriting framework based on sparse autoencoders (SAEs) that generates text with preserved semantics but altered lexical structure. Experiments show that fine-tuning models on SAGE-rewritten data significantly degrades existing MIAs, revealing their high sensitivity to semantics-preserving transformations and their limited capacity to independently substantiate copyright claims in adversarial settings. This work underscores the practical limits of MIAs in copyright auditing and establishes a new paradigm for evaluating their robustness.
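
The SAE-guided rewriting itself is not described here in reproducible detail, but the constraint it enforces (preserve semantics, perturb lexical structure) can be sketched as an acceptance test for candidate paraphrases. The sketch below is an illustration of that constraint, not the SAGE implementation; the `sentence-transformers` model name and both thresholds are assumptions.

```python
# Minimal sketch of the semantics-preserved / structure-altered acceptance
# test a SAGE-style rewriter must satisfy. This is an illustration, NOT the
# paper's SAE-guided method: the embedding model ("all-MiniLM-L6-v2") and
# the thresholds sem_min / lex_max are assumptions.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def ngram_jaccard(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of word n-grams; a cheap proxy for shared lexical structure."""
    def grams(s: str) -> set:
        toks = s.split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / max(1, len(ga | gb))

def accept_rewrite(original: str, rewrite: str,
                   sem_min: float = 0.85, lex_max: float = 0.2) -> bool:
    """Keep a paraphrase only if meaning is preserved but surface form diverges."""
    sem = util.cos_sim(embedder.encode(original), embedder.encode(rewrite)).item()
    return sem >= sem_min and ngram_jaccard(original, rewrite) <= lex_max
```

Under this reading, a rewriter generates candidates and fine-tunes only on those that pass the filter; the paper's distinctive step is driving that rewriting with SAE features rather than surface heuristics like the n-gram proxy used above.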

📝 Abstract
As large language models (LLMs) are trained on increasingly opaque corpora, membership inference attacks (MIAs) have been proposed to audit whether copyrighted texts were used during training, despite growing concerns about their reliability under realistic conditions. We ask whether MIAs can serve as admissible evidence in adversarial copyright disputes where an accused model developer may obfuscate training data while preserving semantic content, and formalize this setting through a judge-prosecutor-accused communication protocol. To test robustness under this protocol, we introduce SAGE (Structure-Aware SAE-Guided Extraction), a paraphrasing framework guided by Sparse Autoencoders (SAEs) that rewrites training data to alter lexical structure while preserving semantic content and downstream utility. Our experiments show that state-of-the-art MIAs degrade when models are fine-tuned on SAGE-generated paraphrases, indicating that their signals are not robust to semantics-preserving transformations. While some leakage remains in certain fine-tuning regimes, these results suggest that MIAs are brittle in adversarial settings and insufficient as a standalone mechanism for copyright auditing of LLMs.
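
For context, the MIAs evaluated in work like this typically reduce to a per-text score that an auditor compares across suspect and control texts. Below is a minimal sketch of one widely used loss-based score, the Min-K% probability heuristic; the model choice ("gpt2") and the value of k are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of a loss-based MIA score (the Min-K% Prob heuristic):
# the average log-probability of the k% least-likely tokens under the model.
# Higher scores are read as weak evidence the text appeared in training.
# Model choice ("gpt2") and k are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def min_k_prob(text: str, k: float = 0.2) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probability the model assigns to each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_lp = log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    k_count = max(1, int(k * token_lp.numel()))
    # Average over the k% lowest-probability tokens.
    return torch.topk(token_lp, k_count, largest=False).values.mean().item()
```

An auditor thresholds this score against held-out controls; the paper's finding is that fine-tuning on structure-altered paraphrases collapses the suspect-versus-control gap that such scores depend on.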
Problem

Research questions and friction points this paper is trying to address.

membership inference
copyright auditing
large language models
adversarial setting
training data obfuscation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Membership Inference Attacks
Copyright Auditing
Sparse Autoencoders
Adversarial Paraphrasing
LLM Training Data