🤖 AI Summary
This paper addresses the provable-forgetting requirement that data-compliance regulations impose on topic models. We propose the first theoretically grounded framework for provable forgetting in LDA-style models. Methodologically, we design a lightweight forgetting algorithm based on a linear adapter head; through a rigorous derivation of theoretical error bounds and a deletion-robustness analysis, we formally characterize the maximum number of samples that can be safely removed, and show that fine-tuned models inherently support low-cost forgetting, requiring no updates to the base model. Our contributions are threefold: (1) the first provable forgetting guarantee for topic modeling; (2) computational overhead independent of the training-data scale; and (3) support for trustworthy deletion of original training data in downstream tasks, including retrieval and classification, while simultaneously ensuring strict forgetting guarantees and stable model performance.
📝 Abstract
Machine unlearning algorithms are increasingly important as legal concerns arise around the provenance of training data, but verifying the success of unlearning is often difficult. Existing provable guarantees for unlearning are largely limited to supervised learning settings. In this paper, we provide the first theoretical guarantees for unlearning in the pre-training and fine-tuning paradigm by studying topic models, simple bag-of-words language models that can be adapted to solve downstream tasks such as retrieval and classification. First, we design a provably effective unlearning algorithm for topic models whose computational overhead is independent of the size of the original dataset. Our analysis additionally quantifies the deletion capacity of the model, i.e., the number of examples that can be unlearned without incurring a significant cost in model performance. Finally, we formally extend our analyses to account for adaptation to a given downstream task. In particular, we design an efficient algorithm to perform unlearning after fine-tuning the topic model via a linear head. Notably, we show that it is easier to unlearn pre-training data from models that have been fine-tuned to a particular task, and that this data can be unlearned without modifying the base model.
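To give a feel for why a linear head makes unlearning cheap, here is a minimal toy sketch (not the paper's algorithm): assuming the downstream head is a ridge-regression model fit on fixed topic features, a single fine-tuning example can be removed exactly via a Sherman-Morrison rank-one downdate of the Gram matrix, at a cost that depends only on the feature dimension, not on the number of training examples. The function names and the ridge setup are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fit_head(Phi, y, lam=1e-2):
    """Fit a ridge-regression linear head on topic features Phi (n x k).

    Returns the weights w, the inverse Gram matrix, and the moment vector b,
    which together suffice for later per-example unlearning.
    """
    k = Phi.shape[1]
    A = Phi.T @ Phi + lam * np.eye(k)  # regularized k x k Gram matrix
    b = Phi.T @ y
    A_inv = np.linalg.inv(A)
    return A_inv @ b, A_inv, b

def unlearn_example(A_inv, b, phi_i, y_i):
    """Exactly remove one example (phi_i, y_i) from the fitted head.

    Uses the Sherman-Morrison identity for a rank-one downdate of A_inv,
    so the cost is O(k^2), independent of the original dataset size.
    """
    Av = A_inv @ phi_i
    A_inv_new = A_inv + np.outer(Av, Av) / (1.0 - phi_i @ Av)
    b_new = b - y_i * phi_i
    return A_inv_new @ b_new, A_inv_new, b_new

# Sanity check: unlearning example 0 matches retraining without it.
rng = np.random.default_rng(0)
Phi = rng.random((50, 5))
y = rng.random(50)
w_full, A_inv, b = fit_head(Phi, y)
w_unlearned, _, _ = unlearn_example(A_inv, b, Phi[0], y[0])
w_retrained, _, _ = fit_head(Phi[1:], y[1:])
print(np.allclose(w_unlearned, w_retrained, atol=1e-8))
```

The downdate reproduces the retrained weights up to floating-point error, which is the flavor of guarantee the abstract describes: deletion whose cost does not grow with the training set. The paper's actual algorithm and bounds of course concern the topic-model setting itself, not this toy regression.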