RepoMark: A Code Usage Auditing Framework for Code Large Language Models

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training on open-source code has sparked significant controversy regarding license compliance and intellectual property rights for large language models (LLMs); existing auditing methods achieve less than 55% detection accuracy on small repositories. To address this, we propose RepoMark, the first verifiable auditing framework designed specifically for code LLM training data. The method combines semantically equivalent code variant generation with a ranking-based hypothesis test, enabling efficient watermark detection under a strict false detection rate (FDR) guarantee of 5%. By pairing semantics-preserving code mutation with statistical memorization detection, RepoMark provides an imperceptible, theoretically grounded audit mechanism. Experiments show that RepoMark achieves over 90% detection success on small repositories, substantially outperforming baseline approaches, and gives open-source authors a deployable, trustworthy tool for verifying downstream data usage.
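The summary's core idea of "semantically equivalent code variants" can be illustrated with a minimal sketch. The paper does not specify its exact transformations here, so this example assumes one common semantics-preserving mutation, local identifier renaming via the Python `ast` module; the function names and the `area` example are hypothetical.

```python
import ast

class RenameVars(ast.NodeTransformer):
    """Rename variable identifiers according to a fixed mapping.

    Renaming a local variable never changes program behavior, so each
    mapping yields a semantically equivalent variant of the source file.
    """
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        # Applies to both loads and stores of the variable.
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

def make_variant(source: str, mapping: dict) -> str:
    """Return a behavior-preserving variant of `source` with variables renamed."""
    tree = ast.parse(source)
    tree = RenameVars(mapping).visit(tree)
    return ast.unparse(tree)  # Python 3.9+

# Toy example: one file, one variant with the local `s` renamed.
original = "def area(w, h):\n    s = w * h\n    return s\n"
variant = make_variant(original, {"s": "result"})
```

In an audit setting, a repository owner would publish one such variant per file while keeping the others private; which variant the model has memorized is what the detection phase later tests.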

📝 Abstract
The rapid development of Large Language Models (LLMs) for code generation has transformed software development by automating coding tasks with unprecedented efficiency. However, the training of these models on open-source code repositories (e.g., from GitHub) raises critical ethical and legal concerns, particularly regarding data authorization and open-source license compliance. Developers are increasingly questioning whether model trainers have obtained proper authorization before using repositories for training, especially given the lack of transparency in data collection. To address these concerns, we propose a novel data marking framework RepoMark to audit the data usage of code LLMs. Our method enables repository owners to verify whether their code has been used in training, while ensuring semantic preservation, imperceptibility, and theoretical false detection rate (FDR) guarantees. By generating multiple semantically equivalent code variants, RepoMark introduces data marks into the code files, and during detection, RepoMark leverages a novel ranking-based hypothesis test to detect memorization within the model. Compared to prior data auditing approaches, RepoMark significantly enhances sample efficiency, allowing effective auditing even when the user's repository possesses only a small number of code files. Experiments demonstrate that RepoMark achieves a detection success rate over 90% on small code repositories under a strict FDR guarantee of 5%. This represents a significant advancement over existing data marking techniques, all of which only achieve accuracy below 55% under identical settings. This further validates RepoMark as a robust, theoretically sound, and promising solution for enhancing transparency in code LLM training, which can safeguard the rights of repository owners.
Problem

Research questions and friction points this paper is trying to address.

Auditing unauthorized use of open-source code in LLM training
Ensuring license compliance and data authorization transparency
Detecting code memorization with high accuracy and low false rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates multiple semantically equivalent code variants
Uses ranking-based hypothesis test for memorization detection
Ensures high detection rate with theoretical FDR guarantees