Uncovering Pretraining Code in LLMs: A Syntax-Aware Attribution Approach

📅 2025-11-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses copyright compliance concerns regarding open-source code (e.g., GPL-licensed) in large language model (LLM) training data, proposing SynPrune—a syntax-aware membership inference attack for precisely determining whether an LLM has memorized specific copyrighted code samples. Methodologically, SynPrune introduces a novel syntactic filtering mechanism: it leverages parser-derived abstract syntax trees to identify and discard syntactically mandatory constructs, retaining the semantically distinctive, author-style-revealing substructures. It further integrates token-level importance scoring with structured code analysis to enable fine-grained provenance attribution. Experiments demonstrate that SynPrune consistently outperforms state-of-the-art methods across diverse code lengths and syntactic categories, achieving substantial gains in both membership inference accuracy and robustness. The approach provides a reliable, interpretable technical foundation for code copyright auditing and for enhancing model transparency.

📝 Abstract
As large language models (LLMs) become increasingly capable, concerns over the unauthorized use of copyrighted and licensed content in their training data have grown, especially in the context of code. Open-source code, often protected by open-source licenses (e.g., GPL), poses legal and ethical challenges when used in pretraining. Detecting whether specific code samples were included in LLM training data is thus critical for transparency, accountability, and copyright compliance. We propose SynPrune, a syntax-pruned membership inference attack method tailored for code. Unlike prior MIA approaches that treat code as plain text, SynPrune leverages the structured and rule-governed nature of programming languages. Specifically, it identifies tokens that are syntactically required and thus not reflective of authorship, and excludes them from attribution when computing membership scores. Experimental results show that SynPrune consistently outperforms the state of the art. Our method is also robust across varying function lengths and syntax categories.
Problem

Research questions and friction points this paper is trying to address.

Detecting unauthorized copyrighted code in LLM training data
Identifying code samples used in pretraining for legal compliance
Developing syntax-aware attribution methods for membership inference attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Syntax-pruned membership inference attack for code
Excludes syntactically required tokens from attribution
Leverages structured nature of programming languages
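The core idea above—dropping syntactically mandated tokens so that membership scores are computed only over authorship-reflective ones—can be illustrated with a minimal sketch. This is not the paper's SynPrune implementation (which uses parser-derived ASTs and token-level importance scoring); it is a simplified approximation using Python's standard tokenizer, and the `token_loss` mapping stands in for hypothetical per-token model losses:

```python
import io
import keyword
import token as tok
import tokenize

# Token types dictated by the grammar rather than chosen by the author:
# operators, punctuation, and layout tokens.
STRUCTURAL = {tok.OP, tok.NEWLINE, tok.NL, tok.INDENT, tok.DEDENT, tok.ENDMARKER}

def prune_structural_tokens(source: str) -> list[str]:
    """Keep only tokens likely to reflect authorship (names, literals)."""
    kept = []
    for t in tokenize.generate_tokens(io.StringIO(source).readline):
        if t.type in STRUCTURAL:
            continue
        if t.type == tok.NAME and keyword.iskeyword(t.string):
            continue  # 'def', 'return', etc. are grammar-required
        if t.type in (tok.NAME, tok.NUMBER, tok.STRING):
            kept.append(t.string)
    return kept

def pruned_membership_score(source: str, token_loss: dict[str, float]) -> float:
    """Average a (hypothetical) per-token loss over the pruned token set only,
    so that boilerplate syntax does not dilute the membership signal."""
    losses = [token_loss.get(s, 0.0) for s in prune_structural_tokens(source)]
    return sum(losses) / len(losses) if losses else 0.0
```

For example, `prune_structural_tokens("def area(radius):\n    return 3.14159 * radius ** 2\n")` keeps only `area`, `radius`, `3.14159`, `radius`, and `2`—the identifiers and literals an author chose—while discarding `def`, `return`, and all punctuation that any syntactically valid function would contain.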
🔎 Similar Papers
No similar papers found.
Yuanheng Li
Tongji University
Zhuoyang Chen
Tongji University
Xiaoyun Liu
Tongji University
Yuhao Wang
Tongji University
Mingwei Liu
Rutgers University
China Labor · High Performance Work Systems
Yang Shi
Tongji University
Kaifeng Huang
Tongji University
OSS Supply Chain · Software Engineering
Shengjie Zhao
Tongji University