Mitigating Sensitive Information Leakage in LLMs4Code through Machine Unlearning

📅 2025-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Code large language models (LLMs4Code) pose privacy risks because they memorize sensitive information from their training data. Method: This work presents the first systematic evaluation of machine unlearning for LLMs4Code, applying three state-of-the-art unlearning algorithms to three prominent open-source LLMs4Code and establishing a unified benchmark that jointly measures privacy forgetting strength and code generation capability. Contribution/Results: Post-unlearning leakage shifts from direct to indirect regurgitation, revealing a novel security challenge. The approach reduces sensitive information recoverability by up to 92.7% while preserving code generation performance (average degradation below 1.2%). These results provide a scalable, empirically grounded path toward privacy-compliant deployment of LLMs4Code.
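This page does not name the three unlearning algorithms. As a minimal sketch, gradient ascent on the forget set is a common baseline in this line of work: instead of minimizing the loss on the sensitive sequences, the update maximizes it so their likelihood drops. The model name, forget set, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of gradient-ascent unlearning, a common baseline for making a
# causal LM "forget" memorized sequences. Everything below (model, forget set,
# learning rate) is an illustrative assumption, not the paper's configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper studies open-source LLMs4Code
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

forget_set = ["api_key = 'hypothetical-secret-123'"]  # sensitive strings to unlearn

model.train()
for text in forget_set:
    batch = tok(text, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])
    (-out.loss).backward()  # ascend: push the memorized tokens' likelihood down
    optimizer.step()
    optimizer.zero_grad()
```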

📝 Abstract
Large Language Models for Code (LLMs4Code) excel at code generation tasks, promising to relieve developers of huge software development burdens. Nonetheless, these models have been shown to suffer from significant privacy risks due to the potential leakage of sensitive information embedded during training, known as the memorization problem. Addressing this issue is crucial for ensuring privacy compliance and upholding user trust, but to date there is a dearth of dedicated studies in the literature on this specific direction. Recently, machine unlearning has emerged as a promising solution by enabling models to "forget" sensitive information without full retraining, offering an efficient and scalable alternative to traditional data cleaning methods. In this paper, we empirically evaluate the effectiveness of unlearning techniques for addressing privacy concerns in LLMs4Code. Specifically, we investigate three state-of-the-art unlearning algorithms and three well-known open-source LLMs4Code, on a benchmark that takes into consideration both the privacy data to be forgotten and the code generation capabilities of these models. Results show that it is feasible to mitigate the privacy concerns of LLMs4Code through machine unlearning while maintaining their code generation capabilities. We also dissect the forms of privacy protection/leakage after unlearning and observe a shift from direct leakage to indirect leakage, which underscores the need for future studies addressing this risk.
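To make the direct-leakage notion concrete, a simple probe is to feed the model the context that preceded a secret in training and check whether the secret is regurgitated verbatim; the indirect leakage the authors observe would, by contrast, surface the information in paraphrased or fragmented form. The sketch below is a generic probe under assumed names (model, prefix, secret), not the paper's benchmark.

```python
# Hedged sketch of a direct-leakage probe: prompt with the context that preceded
# a secret and test for verbatim regurgitation. Model, prefix, and secret are
# hypothetical placeholders; the paper's benchmark is not specified on this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prefix = "aws_secret_access_key = "                    # context seen in training
secret = "hypothetical-secret-123"                     # the memorized value

inputs = tok(prefix, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32, do_sample=False)
completion = tok.decode(out[0][inputs["input_ids"].shape[1]:])

print("direct leak" if secret in completion else "no verbatim leak")
```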
Problem

Research questions and friction points this paper is trying to address.

Mitigating sensitive information leakage in LLMs4Code
Evaluating the effectiveness of machine unlearning for privacy protection
Maintaining code generation capabilities post-unlearning (a capability check is sketched below)
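
One rough way to operationalize the capability side of the benchmark is to compare perplexity on held-out code before and after unlearning. The model names, the local checkpoint path, and the tiny evaluation set below are hypothetical; the paper's actual metrics are not given on this page.

```python
# Hedged sketch: gauge capability retention by comparing perplexity on held-out
# code before and after unlearning. Model names, the checkpoint path, and the
# snippets are hypothetical; the paper's benchmark is not detailed on this page.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def code_perplexity(model_name: str, snippets: list[str]) -> float:
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    losses = []
    with torch.no_grad():
        for s in snippets:
            batch = tok(s, return_tensors="pt")
            losses.append(model(**batch, labels=batch["input_ids"]).loss.item())
    return math.exp(sum(losses) / len(losses))  # exp of mean token loss

snippets = ["def add(a, b):\n    return a + b"]  # stand-in evaluation set
ppl_base = code_perplexity("gpt2", snippets)                    # pre-unlearning
ppl_unlearned = code_perplexity("./unlearned-model", snippets)  # hypothetical checkpoint
print(f"perplexity: base={ppl_base:.2f} unlearned={ppl_unlearned:.2f}")
```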
Innovation

Methods, ideas, or system contributions that make the work stand out.

Machine unlearning mitigates privacy risks without full retraining
Evaluates three state-of-the-art unlearning algorithms on three open-source LLMs4Code
Maintains code generation capabilities post-unlearning
Identifies a post-unlearning shift from direct to indirect leakage