Scrub It Out! Erasing Sensitive Memorization in Code Language Models via Machine Unlearning

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Code language models (CLMs) often memorize sensitive code snippets from their training data, posing significant privacy risks. Existing mitigation approaches require full model retraining, incurring prohibitive computational overhead for deployed models. This paper introduces CodeEraser, a machine unlearning method tailored to CLMs that selectively erases sensitive memorized segments via gradient ascent, with structural-preservation constraints that keep the surrounding code correct, all without full retraining. The authors evaluate CodeEraser on CodeParrot, CodeGen-Mono, and Qwen2.5-Coder, using a newly constructed benchmark of 50,000 high-risk sensitive code samples. Experiments demonstrate that CodeEraser effectively suppresses regeneration of the targeted sensitive information with minimal impact on downstream functionality: average performance degradation remains below 1.2% across code completion, translation, and other core tasks. CodeEraser thus achieves strong privacy protection without compromising functional integrity.

📝 Abstract
While Code Language Models (CLMs) have demonstrated superior performance in software engineering tasks such as code generation and summarization, recent empirical studies reveal a critical privacy vulnerability: these models exhibit unintended memorization of sensitive training data, enabling verbatim reproduction of confidential information when specifically prompted. To address this issue, several approaches, including training data de-duplication and differential privacy augmentation, have been proposed. However, these methods require full-model retraining for deployed CLMs, which incurs substantial computational costs. In this paper, we aim to answer the following research question: Can sensitive information memorized by CLMs be erased effectively and efficiently? We conduct a pioneering investigation into erasing sensitive memorization in CLMs through machine unlearning, a post-hoc modification method that removes specific information from trained models without requiring full retraining. Specifically, we first quantify the memorization risks of sensitive data within CLM training datasets and curate a high-risk dataset of 50,000 sensitive memorized samples as unlearning targets. We study two widely used gradient ascent-based unlearning approaches: the vanilla and constraint-based methods, and introduce CodeEraser, an advanced variant that selectively unlearns sensitive memorized segments in code while preserving the structural integrity and functional correctness of the surrounding code. Extensive experiments on three families of CLMs, i.e., CodeParrot, CodeGen-Mono, and Qwen2.5-Coder, validate the effectiveness and efficiency of CodeEraser in erasing targeted sensitive memorization while maintaining model utility.
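The gradient ascent-based unlearning the abstract refers to can be illustrated with a toy sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: it treats a single softmax over a four-token vocabulary as the "model" and ascends the negative log-likelihood of a memorized sensitive token, which drives that token's probability down.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy "model": a single softmax over a 4-token vocabulary.
# Token 2 plays the role of a memorized sensitive token.
logits = [0.0, 0.0, 3.0, 0.0]
sensitive = 2
lr = 0.5

p_before = softmax(logits)[sensitive]

# Gradient ascent on the NLL of the sensitive token:
# d(-log p_t)/d(logit_j) = p_j - 1[j == t]; ascent adds this gradient.
for _ in range(20):
    p = softmax(logits)
    for j in range(len(logits)):
        grad = p[j] - (1.0 if j == sensitive else 0.0)
        logits[j] += lr * grad  # ascent: increase the NLL

p_after = softmax(logits)[sensitive]
print(p_before, p_after)  # probability of the sensitive token drops sharply
```

In a real CLM the same update is applied to the network's parameters via backpropagation rather than directly to output logits; this sketch only shows why climbing the loss suppresses regeneration of the memorized token.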
Problem

Research questions and friction points this paper is trying to address.

Erasing sensitive memorization in code language models
Avoiding full retraining with machine unlearning methods
Maintaining model utility while removing confidential information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Machine unlearning erases sensitive memorization
Gradient ascent methods remove specific information
CodeEraser preserves code integrity and function
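CodeEraser's distinguishing idea, unlearning only the sensitive segment while keeping the surrounding code intact, can likewise be sketched as a toy (hypothetical, not the paper's code): apply gradient ascent on the loss at positions marked sensitive, and ordinary descent at structural positions to retain them.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    e = [math.exp(x - m) for x in logits]
    s = sum(e)
    return [v / s for v in e]

def nll_step(logits, target, lr, ascend):
    # Gradient of -log p_target w.r.t. logit j is p_j - 1[j == target].
    # ascend=True climbs the loss (unlearn the target token);
    # ascend=False descends it (retain the target token).
    p = softmax(logits)
    sign = 1.0 if ascend else -1.0
    return [x + sign * lr * (p[j] - (j == target)) for j, x in enumerate(logits)]

# Two independent token positions from a memorized snippet:
# one structural token to keep, one secret token to erase.
keep_logits, keep_tok = [2.0, 0.0, 0.0], 0      # e.g. a keyword token
secret_logits, secret_tok = [0.0, 2.5, 0.0], 1  # e.g. an API-key token

for _ in range(30):
    keep_logits = nll_step(keep_logits, keep_tok, lr=0.5, ascend=False)
    secret_logits = nll_step(secret_logits, secret_tok, lr=0.5, ascend=True)

keep_p = softmax(keep_logits)[keep_tok]
secret_p = softmax(secret_logits)[secret_tok]
print(keep_p, secret_p)  # structural token stays likely; secret token collapses
```

The per-position mask is the key design choice: a vanilla unlearning pass would ascend the loss on the whole sequence and degrade surrounding code, whereas the selective variant confines the ascent to the sensitive span.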