🤖 AI Summary
This paper addresses the challenge of efficiently and verifiably removing specific training data from machine learning models to comply with the "right to be forgotten." We propose MaxRR, the first model unlearning framework to simultaneously achieve precision and practicality. Methodologically: (1) we introduce a generalized variant of the standard unlearning metric that enables reliable verification even under weak guarantees; (2) we design an unlearning-aware training paradigm integrating model partitioning and core sample selection to achieve exact unlearning in many cases; (3) when only approximate unlearning is possible, our method closely matches the performance of full retraining. Our key contribution is overcoming the fundamental limitation that weak unlearning guarantees previously precluded trustworthy verification, thereby significantly improving both unlearning efficiency and verifiability. The framework provides a theoretically sound and empirically viable unlearning solution for regulatory-compliant AI systems.
📝 Abstract
Machine unlearning is essential for meeting legal obligations such as the right to be forgotten, which requires the removal of specific data from machine learning models upon request. While several approaches to unlearning have been proposed, existing solutions often struggle with efficiency and, more critically, with the verification of unlearning, particularly under weak unlearning guarantees, where verification remains an open challenge. We introduce a generalized variant of the standard unlearning metric that enables more efficient and precise unlearning strategies, and we present an unlearning-aware training procedure that, in many cases, allows for exact unlearning. We term our approach MaxRR. When exact unlearning is not feasible, MaxRR still supports efficient unlearning with properties closely matching those achieved through full retraining.