SCOPE: Intrinsic Semantic Space Control for Mitigating Copyright Infringement in LLMs

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) may inadvertently reproduce copyrighted content during generation; existing inference-time defenses—relying on external filters or surface-level string matching—struggle to mitigate semantic, paraphrasing-based leakage. This work reframes copyright protection as an internal semantic-space control problem within the LLM, proposing a lightweight, parameter-free intervention method that requires no external components. Specifically, we employ sparse autoencoders (SAEs) to interpret hidden-layer representations, projecting them into a high-dimensional, near-monosemantic feature space in which copyright-sensitive subspaces can be identified and isolated; during decoding, we dynamically suppress activations in these subspaces. Evaluated on standard copyright detection benchmarks, our approach substantially reduces infringement risk while preserving generation quality and general-purpose capabilities. Interpretability analysis confirms that the identified subspaces encode high-level semantic features, validating their relevance to copyright-sensitive content. To our knowledge, this is the first method to address copyright leakage via intrinsic, representation-level intervention without fine-tuning or auxiliary modules.

📝 Abstract
Large language models sometimes inadvertently reproduce passages that are copyrighted, exposing downstream applications to legal risk. Most existing inference-time defences focus on surface-level token matching and rely on external blocklists or filters, which add deployment complexity and may overlook semantically paraphrased leakage. In this work, we reframe copyright infringement mitigation as intrinsic semantic-space control and introduce SCOPE, an inference-time method that requires no parameter updates or auxiliary filters. Specifically, a sparse autoencoder (SAE) projects hidden states into a high-dimensional, near-monosemantic space; benefiting from this representation, we identify a copyright-sensitive subspace and clamp its activations during decoding. Experiments on widely recognized benchmarks show that SCOPE mitigates copyright infringement without degrading general utility. Further interpretability analyses confirm that the isolated subspace captures high-level semantics.
Problem

Research questions and friction points this paper is trying to address.

Mitigating copyright infringement in large language models
Reducing legal risks from reproduced copyrighted passages
Controlling semantic leakage without external filters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intrinsic semantic-space control without parameter updates
Sparse autoencoder projects hidden states into semantic space
Clamp copyright-sensitive subspace activations during decoding
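The intervention described above—encode a hidden state with an SAE, zero out the flagged features, decode back—can be sketched as follows. This is a minimal illustration with a toy random encoder and a pseudo-inverse decoder; the paper's actual SAE weights, layer choice, and the procedure for selecting sensitive feature indices are not specified here, so `sensitive_idx` and all dimensions are assumptions.

```python
# Toy sketch of SAE-based activation clamping (not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 16, 64              # hidden size and (larger) SAE feature size
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
W_dec = np.linalg.pinv(W_enc)        # toy decoder; a trained SAE learns this

def sae_encode(h):
    """Project a hidden state into the sparse, near-monosemantic feature space."""
    return np.maximum(h @ W_enc, 0.0)            # ReLU feature activations

def sae_decode(f):
    """Map SAE features back to the model's hidden space."""
    return f @ W_dec

def clamp_features(h, sensitive_idx, value=0.0):
    """Suppress copyright-sensitive features, then reconstruct the hidden state."""
    f = sae_encode(h)
    f[..., sensitive_idx] = value                # clamp the flagged subspace
    return sae_decode(f)

h = rng.normal(size=d_model)                     # one hidden state at decode time
sensitive_idx = [3, 17, 42]                      # hypothetical copyright-sensitive features
h_safe = clamp_features(h, sensitive_idx)        # intervened state fed onward
```

In a real deployment this clamp would run inside the forward pass at the chosen layer on every decoding step, with `sensitive_idx` found beforehand by comparing SAE activations on copyrighted versus benign text.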
Zhenliang Zhang
Wangxuan Institute of Computer Technology, Peking University
Xinyu Hu
Wangxuan Institute of Computer Technology, Peking University
Xiaojun Wan
Peking University
Natural Language Processing · Text Mining · Artificial Intelligence