UnPII: Unlearning Personally Identifiable Information with Quantifiable Exposure Risk

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing machine unlearning methods that apply uniform strategies to all personally identifiable information (PII), disregarding their varying privacy risks and operational impacts. To overcome this, we propose a PII-centric, priority-driven unlearning framework that introduces, for the first time, a quantifiable PII Risk Index (PRI) to guide differentiated unlearning strategies by integrating multidimensional risk factors. Our approach seamlessly incorporates mainstream algorithms such as Gradient Ascent, Negative Preference Optimization, and Direct Preference Optimization. Experimental results demonstrate that the proposed method incurs only a 27.5% increase in fine-tuning overhead while achieving improvements of up to 11.8% in accuracy, 6.3% in utility, and 12.4% in generalization. Additionally, we construct a synthetic PII dataset that closely mirrors real-world exposure scenarios to support comprehensive evaluation.

📝 Abstract
The ever-increasing adoption of Large Language Models in critical sectors such as finance, healthcare, and government raises privacy concerns regarding the handling of sensitive Personally Identifiable Information (PII) during training. In response, regulations such as the European Union's General Data Protection Regulation (GDPR) mandate the deletion of PII upon request, underscoring the need for reliable and cost-effective data removal solutions. Machine unlearning has emerged as a promising direction for selectively forgetting data points. However, existing unlearning techniques typically apply a uniform forgetting strategy that neither accounts for the varying privacy risks posed by different PII attributes nor reflects the associated business risks. In this work, we propose UnPII, the first PII-centric unlearning approach that prioritizes forgetting based on the risk of individual or combined PII attributes. To this end, we introduce the PII Risk Index (PRI), a composite metric that incorporates multiple dimensions of risk: identifiability, sensitivity, usability, linkability, permanency, exposability, and compliancy. The PRI enables a nuanced evaluation of the privacy risks associated with PII exposure and can be tailored to align with organizational privacy policies. To support realistic assessment, we systematically construct a synthetic PII dataset of 1,700 instances that simulates realistic exposure scenarios. UnPII integrates seamlessly with established unlearning algorithms, such as Gradient Ascent, Negative Preference Optimization, and Direct Preference Optimization, without modifying their underlying principles. Our experimental results demonstrate that UnPII achieves improvements of up to 11.8% in accuracy, 6.3% in utility, and 12.4% in generalizability, while incurring a modest fine-tuning overhead of 27.5% on average during unlearning.
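The abstract names seven risk dimensions that the PRI combines but does not state the aggregation formula. A minimal sketch, assuming a weighted average over those seven dimensions (the weighting scheme, score ranges, and the `pri` function itself are illustrative assumptions, not the paper's actual definition):

```python
# Hypothetical PRI sketch: a weighted average over the seven risk
# dimensions listed in the abstract. Per-dimension scores and weights
# are assumed to lie in [0, 1] and are not taken from the paper.

DIMENSIONS = (
    "identifiability", "sensitivity", "usability", "linkability",
    "permanency", "exposability", "compliancy",
)

def pri(scores, weights=None):
    """Return a composite risk index in [0, 1] for one PII attribute.

    scores  -- dict mapping each dimension name to a risk score in [0, 1]
    weights -- optional dict of per-dimension weights (uniform if omitted),
               allowing the index to be tailored to an organization's policy
    """
    if weights is None:
        weights = {d: 1.0 for d in DIMENSIONS}
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * scores[d] for d in DIMENSIONS) / total_weight

# Illustrative scoring: a national ID number might rate high on most
# dimensions, yielding a high PRI and thus a high unlearning priority.
national_id = {
    "identifiability": 1.0, "sensitivity": 0.9, "usability": 0.8,
    "linkability": 0.9, "permanency": 1.0, "exposability": 0.6,
    "compliancy": 1.0,
}
print(round(pri(national_id), 3))  # high-priority attribute
```

Under this sketch, attributes whose PRI exceeds a policy threshold would be forgotten with a stronger strategy (e.g., Gradient Ascent) while low-risk attributes receive a lighter one, which is the kind of differentiated, priority-driven treatment the paper describes.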
Problem

Research questions and friction points this paper is trying to address.

Personally Identifiable Information
Machine Unlearning
Privacy Risk
Data Deletion
GDPR
Innovation

Methods, ideas, or system contributions that make the work stand out.

PII unlearning
privacy risk quantification
PII risk index
machine unlearning
personalized forgetting
Intae Jeon
Samsung Research
Yujeong Kwon
Sungkyunkwan University
Hyungjoon Koo
Sungkyunkwan University
Software security · System security · Digital Forensics