Provable unlearning in topic modeling and downstream tasks

📅 2024-11-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses provable forgetting requirements for topic models under data-compliance regulations, proposing the first theoretically grounded framework for provable unlearning in LDA-style models. Methodologically, the authors design a lightweight unlearning algorithm whose computational overhead is independent of the size of the original training set. Through a derivation of theoretical error bounds and a deletion-robustness analysis, they characterize the model's deletion capacity, i.e., the maximum number of samples that can be removed without significantly degrading performance. They further extend the analysis to fine-tuning via a linear head, showing that pre-training data can be unlearned from a fine-tuned model without modifying the base model. The contributions are threefold: (1) the first provable unlearning guarantee for topic modeling; (2) computational overhead independent of the training-data scale; and (3) support for trustworthy deletion of original training data in downstream tasks, including retrieval and classification, while preserving both strict forgetting guarantees and stable model performance.

📝 Abstract
Machine unlearning algorithms are increasingly important as legal concerns arise around the provenance of training data, but verifying the success of unlearning is often difficult. Provable guarantees for unlearning are often limited to supervised learning settings. In this paper, we provide the first theoretical guarantees for unlearning in the pre-training and fine-tuning paradigm by studying topic models, simple bag-of-words language models that can be adapted to solve downstream tasks like retrieval and classification. First, we design a provably effective unlearning algorithm for topic models that incurs a computational overhead independent of the size of the original dataset. Our analysis additionally quantifies the deletion capacity of the model -- i.e., the number of examples that can be unlearned without incurring a significant cost in model performance. Finally, we formally extend our analyses to account for adaptation to a given downstream task. In particular, we design an efficient algorithm to perform unlearning after fine-tuning the topic model via a linear head. Notably, we show that it is easier to unlearn pre-training data from models that have been fine-tuned to a particular task, and one can unlearn this data without modifying the base model.
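The abstract's key efficiency claim is that unlearning overhead can be independent of the original dataset size. For a bag-of-words model whose parameters are estimated from count-based sufficient statistics, the general idea can be sketched as follows: deleting a document only requires subtracting its word counts, so the cost scales with the deleted document's length, not the corpus size. This is a minimal illustrative sketch, not the paper's algorithm; the class name and smoothing parameter are hypothetical.

```python
import numpy as np

class CountTopicModel:
    """Hypothetical bag-of-words model storing per-topic word counts
    as sufficient statistics."""

    def __init__(self, n_topics, vocab_size):
        self.counts = np.zeros((n_topics, vocab_size))

    def add_document(self, topic, word_ids):
        # accumulate the document's word counts into its topic's row
        for w in word_ids:
            self.counts[topic, w] += 1

    def delete_document(self, topic, word_ids):
        # "unlearning": subtract exactly the document's contribution,
        # leaving the statistics as if it had never been seen
        for w in word_ids:
            self.counts[topic, w] -= 1

    def topic_word_probs(self, alpha=1.0):
        # smoothed estimate of p(word | topic) from the counts
        smoothed = self.counts + alpha
        return smoothed / smoothed.sum(axis=1, keepdims=True)
```

After `delete_document`, the statistics are identical to those of a model trained only on the remaining documents, which is the sense in which such deletion is exact.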
Problem

Research questions and friction points this paper is trying to address.

Provable unlearning guarantees for topic models
Efficient unlearning algorithm with dataset-size-independent overhead
Unlearning pre-training data in fine-tuned models without base modification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Provable unlearning algorithm for topic models
Computational overhead independent of dataset size
Efficient unlearning after fine-tuning via linear head
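The last point, unlearning after fine-tuning via a linear head while leaving the base model untouched, can be illustrated with a standard trick for linear models (a sketch under assumed details, not the paper's construction): if the head is fit by ridge regression on frozen base-model features, maintaining the sufficient statistics A = XᵀX + λI and b = Xᵀy lets one remove a training example with a rank-one downdate, at cost independent of the training-set size.

```python
import numpy as np

class RidgeHead:
    """Hypothetical linear head on frozen features, fit in closed form
    from sufficient statistics so examples can be deleted exactly."""

    def __init__(self, dim, lam=1.0):
        self.A = lam * np.eye(dim)  # X^T X + lam * I
        self.b = np.zeros(dim)      # X^T y

    def add(self, x, y):
        self.A += np.outer(x, x)
        self.b += y * x

    def delete(self, x, y):
        # unlearn one (feature, label) pair via a rank-one downdate;
        # the base model producing x is never modified
        self.A -= np.outer(x, x)
        self.b -= y * x

    def weights(self):
        # closed-form ridge solution from the current statistics
        return np.linalg.solve(self.A, self.b)
```

The resulting weights are exactly those of a head retrained from scratch on the remaining examples, which mirrors the abstract's observation that one can unlearn data from a fine-tuned model without touching the base model.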