🤖 AI Summary
Large language models (LLMs) often overfit and memorize personally identifiable information (PII) from their training data, posing severe privacy risks. To address this, we propose Randomized Masked Fine-Tuning (RMFT), a lightweight fine-tuning method that dynamically masks sensitive PII fields during adaptation while preserving model utility via parameter-efficient mechanisms. We further introduce MaxTER, a systematic evaluation framework, together with the Area Under the Response Curve (AURC) metric, the first to jointly and quantitatively characterize the trade-off between PII extraction rate and language-modeling performance. Experiments on the Enron email dataset show that RMFT reduces total PII extraction by 80.81% and seen-PII extraction by 80.17%, with only a 5.73% increase in perplexity, substantially outperforming baselines such as deduplication. This work establishes an efficient, rigorously evaluable, low-overhead paradigm for privacy-preserving LLM training.
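The masking step described above can be pictured with a minimal sketch. The paper does not specify its PII detectors or mask tokens, so the regexes, mask labels, and the `mask_prob` parameter below are illustrative assumptions, not the actual RMFT implementation:

```python
import random
import re

# Illustrative PII detectors (assumptions; the paper's detectors may differ).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def rmft_mask(text, mask_prob=0.8, seed=None):
    """Randomly replace detected PII spans with a placeholder token.

    Each detected span is masked independently with probability
    `mask_prob` before the example enters the fine-tuning loop.
    """
    rng = random.Random(seed)
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(
            lambda m: f"[{label}]" if rng.random() < mask_prob else m.group(0),
            text,
        )
    return text

# With mask_prob=1.0 every detected span is masked deterministically.
print(rmft_mask("Contact Jeff at jeff.skilling@enron.com or 713-555-0123.",
                mask_prob=1.0))
# Contact Jeff at [EMAIL] or [PHONE].
```

Because masking is randomized per example, the model still sees some unmasked context across epochs, which is one plausible way such a scheme limits the utility loss relative to masking everything.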
📝 Abstract
Memorization in natural language models, especially Large Language Models (LLMs), poses severe security and privacy risks, as models tend to memorize personally identifiable information (PII) from training data. We introduce Randomized Masked Fine-Tuning (RMFT), a novel privacy-preserving fine-tuning technique that reduces PII memorization while minimizing performance impact. Using the Enron Email Dataset, we demonstrate that RMFT achieves an 80.81% reduction in Total Extraction Rate and an 80.17% reduction in Seen Extraction Rate compared to baseline fine-tuning, outperforming deduplication methods while incurring only a 5.73% increase in perplexity. We present MaxTER, a Pareto-optimal evaluation framework for assessing privacy-utility tradeoffs, and compare RMFT against deduplication using the Area Under the Response Curve (AURC) metric.
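An AURC-style comparison can be sketched as a numerical integration over privacy-utility operating points. The abstract does not give the metric's exact axes or normalization, so the choice of perplexity on the x-axis, extraction rate on the y-axis, the trapezoidal rule, and all numbers below are assumptions for illustration only:

```python
def aurc(utility_points, extraction_rates):
    """Trapezoidal area under a (utility, extraction-rate) curve.

    Lower area means less PII is extractable across the range of
    utility levels, i.e. a better privacy-utility trade-off.
    """
    pts = sorted(zip(utility_points, extraction_rates))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0  # trapezoid between points
    return area

# Hypothetical operating points (perplexity, extraction rate); not paper data.
baseline_area = aurc([10.0, 10.5, 11.0], [0.30, 0.20, 0.15])
rmft_area = aurc([10.0, 10.5, 11.0], [0.06, 0.04, 0.03])
print(rmft_area < baseline_area)  # True: the lower curve leaks less PII
```

Summarizing the whole curve into one scalar is what lets two methods be compared even when neither Pareto-dominates the other at every operating point.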