🤖 AI Summary
Large language models (LLMs) may inadvertently reproduce copyrighted content during generation, and existing inference-time defenses, which rely on external filters or surface-level string matching, struggle to mitigate leakage via semantic paraphrasing. This work reframes copyright protection as an internal semantic-space control problem within the LLM, proposing a lightweight, parameter-free intervention method that requires no external components. Specifically, we employ sparse autoencoders (SAEs) to interpret hidden-layer representations, identifying and isolating a copyright-sensitive subspace within the SAE's high-dimensional, near-monosemantic feature space; during decoding, we dynamically suppress activations in this subspace. Evaluated on standard copyright detection benchmarks, our approach substantially reduces infringement risk while preserving generation quality and general-purpose capabilities. Interpretability analysis confirms that the identified subspace encodes high-level semantic features, validating its relevance to copyright-sensitive content. To our knowledge, this is the first method to address copyright leakage via intrinsic, representation-level intervention, without fine-tuning or auxiliary modules.
📝 Abstract
Large language models sometimes inadvertently reproduce copyrighted passages, exposing downstream applications to legal risk. Most existing inference-time defenses focus on surface-level token matching and rely on external blocklists or filters, which add deployment complexity and can miss semantically paraphrased leakage. In this work, we reframe copyright infringement mitigation as intrinsic semantic-space control and introduce SCOPE, an inference-time method that requires no parameter updates or auxiliary filters. Specifically, a sparse autoencoder (SAE) projects hidden states into a high-dimensional, near-monosemantic feature space; within this representation, we identify a copyright-sensitive subspace and clamp its activations during decoding. Experiments on widely recognized benchmarks show that SCOPE mitigates copyright infringement without degrading general utility. Further interpretability analyses confirm that the isolated subspace captures high-level semantics.
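The core intervention described above, encoding a hidden state with an SAE, zeroing a set of sensitive feature activations, and folding the edit back into the residual stream, can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the SAE weights (`W_enc`, `W_dec`), the toy dimensions, and the `sensitive_idx` feature indices are all hypothetical stand-ins; a real deployment would load a pretrained SAE for a specific layer and use empirically identified copyright-sensitive features.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 8, 32  # toy sizes; real SAEs use far larger dictionaries

# Hypothetical SAE weights (random here purely for illustration;
# in practice these come from a pretrained sparse autoencoder).
W_enc = rng.standard_normal((d_sae, d_model)) / np.sqrt(d_model)
b_enc = np.zeros(d_sae)
W_dec = rng.standard_normal((d_model, d_sae)) / np.sqrt(d_sae)

def sae_encode(h: np.ndarray) -> np.ndarray:
    """Project a hidden state into the sparse, near-monosemantic feature space."""
    return np.maximum(W_enc @ h + b_enc, 0.0)  # ReLU yields sparse activations

def sae_decode(f: np.ndarray) -> np.ndarray:
    """Map SAE feature activations back to the hidden-state space."""
    return W_dec @ f

def clamp_subspace(h: np.ndarray, sensitive_idx, clamp_value: float = 0.0) -> np.ndarray:
    """Suppress the chosen features and apply the edit to the hidden state.

    Adding the difference of reconstructions (rather than replacing h with the
    clamped reconstruction) leaves the SAE's reconstruction error untouched,
    so only the targeted feature directions are modified.
    """
    f = sae_encode(h)
    f_clamped = f.copy()
    f_clamped[sensitive_idx] = clamp_value
    return h + sae_decode(f_clamped) - sae_decode(f)

h = rng.standard_normal(d_model)            # stand-in for one token's hidden state
sensitive_idx = [3, 17]                     # hypothetical copyright-sensitive features
h_edited = clamp_subspace(h, sensitive_idx)
```

At decode time, a hook would apply `clamp_subspace` to the chosen layer's activations at every generation step; with an empty `sensitive_idx` the edit is the identity, which is a useful sanity check.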