🤖 AI Summary
This work addresses the longstanding conflation in machine unlearning research between “untraining” and “unlearning,” which has led to ambiguous problem formulations and inadequate evaluation criteria. We formally distinguish these concepts for the first time: untraining aims to remove the influence of specific training samples, whereas true unlearning requires erasing the model’s knowledge of the entire underlying data distribution or concept those samples represent. Through theoretical formalization and a systematic review of existing literature, we establish a clear conceptual framework, reclassify current methods accordingly, and uncover critical challenges that have been overlooked. By clarifying foundational definitions, this study lays the groundwork for rigorous algorithmic evaluation, promotes standardization in the field, and delineates promising directions for future research.
📝 Abstract
As models grow larger and are trained on increasing amounts of data, there has been an explosion of interest in how we can ``delete'' specific data points or behaviours from a trained model, after the fact. This goal has been referred to as ``machine unlearning''. In this note, we argue that the term ``unlearning'' has been overloaded, with different research efforts spanning two distinct problem formulations, without that distinction being observed or acknowledged in the literature. This causes various issues, including ambiguity around when an algorithm is expected to work, use of inappropriate metrics and baselines when comparing different algorithms to one another, difficulty in interpreting results, and missed opportunities for pursuing critical research directions. We address this issue by establishing a fundamental distinction between two notions that we identify as ``unlearning'' and ``untraining'', illustrated in Figure 1. In short, untraining aims to reverse the effect of having trained on a given forget set, i.e. to remove the influence that those specific forget-set examples had on the model during training. The goal of unlearning, on the other hand, is not just to remove the influence of those given examples, but to use them to more broadly remove the entire underlying distribution from which they were sampled (e.g. the concept or behaviour that those examples represent). We discuss technical definitions of these problems and map problem settings studied in the literature to each. We hope to initiate discussions on disambiguating technical definitions and to identify a set of overlooked research questions, as we believe this is a key missing step for accelerating progress in the field of ``unlearning''.
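The distinction between the two notions can be sketched informally in symbols. The notation below is illustrative, not the paper's own: $\mathcal{A}$ denotes a training algorithm, $D$ the training set, $S \subseteq D$ the forget set, $\mathcal{P}_f$ the underlying distribution from which $S$ was sampled, and $U$ a candidate removal procedure.

```latex
% Informal sketch of the two goals (illustrative notation, not the paper's).
% A: training algorithm, D: training set, S ⊆ D: forget set,
% P_f: underlying distribution from which S was sampled, U: removal procedure.

% Untraining: reverse the effect of having trained on the specific examples in S,
% i.e. approximate the model that would result from retraining without S.
\[
  U\bigl(\mathcal{A}(D),\, S\bigr) \;\approx\; \mathcal{A}\bigl(D \setminus S\bigr)
\]

% Unlearning: use S to remove the entire underlying distribution P_f,
% so that on any x drawn from P_f (seen during training or not), the model
% behaves like one never exposed to data from P_f.
\[
  U\bigl(\mathcal{A}(D),\, S\bigr)(x) \;\approx\;
  \mathcal{A}\bigl(D \setminus \mathrm{supp}(\mathcal{P}_f)\bigr)(x)
  \qquad \text{for } x \sim \mathcal{P}_f
\]
```

Under this reading, untraining is evaluated against a retrained-from-scratch reference model, whereas unlearning must additionally generalize to unseen samples of the forgotten concept; the two criteria therefore call for different metrics and baselines.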