AI Summary
Existing large language models (LLMs) predominantly rely on approximate unlearning methods for data deletion, which suffer from security vulnerabilities and are susceptible to information-recovery attacks. This paper proposes a model-merging-based exact unlearning framework that overcomes two key bottlenecks of conventional merging: task performance degradation and cross-task information leakage, both of which inflate deletion costs. Its core innovation is Sign-Fixed Tuning-Masks (SIFT-Masks), which combines sign-aligned fine-tuning with lightweight pre-merging masks to achieve, for the first time, provably exact unlearning at merging scales of roughly 100 to 500 models. Experiments demonstrate that, under strict unlearning guarantees, the method improves task accuracy by 5-80%, reduces the compute required for exact unlearning by up to 250x relative to baseline merging approaches, and simultaneously delivers security, efficiency, and practicality.
Abstract
Approximate unlearning has gained popularity as an approach to efficiently update an LLM so that it behaves (roughly) as if it were not trained on a subset of data to begin with. However, existing methods are brittle in practice and can easily be attacked to reveal supposedly unlearned information. To alleviate issues with approximate unlearning, we instead propose SIFT-Masks (SIgn-Fixed Tuning-Masks), an exact unlearning method based on model merging. SIFT-Masks addresses two key limitations of standard model merging: (1) merging a large number of tasks can severely harm utility; and (2) methods that boost utility by sharing extra information across tasks make exact unlearning prohibitively expensive. SIFT-Masks solves these issues by (1) applying local masks to recover task-specific performance; and (2) constraining fine-tuning to align with a global sign vector as a lightweight approach to determine masks independently before merging. Across four settings where we merge up to 500 models, SIFT-Masks improves accuracy by 5-80% over naive merging and uses up to 250x less compute for exact unlearning compared to other merging baselines.
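To make the mechanism concrete, below is a minimal, illustrative sketch (not the authors' code) of the idea described in the abstract: each task is fine-tuned with its update constrained to agree with a shared global sign vector, its mask is derived locally before any merging, and exact unlearning amounts to re-merging without the deleted task's stored update. The toy "fine-tuning" loop, the mask definition (nonzero coordinates of the task update), and the masked-average merge rule are simplifying assumptions for illustration, not the paper's exact recipe.

```python
# Toy sketch of sign-fixed tuning, local masks, and merge-based exact unlearning.
# Uses NumPy on small "parameter vectors"; all specific rules here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim, n_tasks = 16, 5

base = rng.normal(size=dim)                 # shared base model parameters
global_sign = np.sign(rng.normal(size=dim)) # fixed global sign vector shared by all tasks


def sign_fixed_finetune(base, global_sign, steps=100, lr=0.1):
    """Toy 'fine-tuning': accumulate noisy update steps, but keep only the
    coordinates whose sign agrees with the global sign vector."""
    delta = np.zeros_like(base)
    for _ in range(steps):
        delta += lr * rng.normal(size=base.shape)  # stand-in for a gradient step
        # Enforce sign alignment: zero out coordinates that disagree with global_sign.
        delta = np.where(np.sign(delta) == global_sign, delta, 0.0)
    return delta


# Each task is fine-tuned independently; its mask is derived locally from its own
# update, so no information is shared across tasks before merging.
task_deltas = [sign_fixed_finetune(base, global_sign) for _ in range(n_tasks)]
task_masks = [(d != 0).astype(float) for d in task_deltas]


def merge(deltas, masks, keep):
    """Merge the kept tasks by averaging their masked updates coordinate-wise."""
    num = sum(masks[i] * deltas[i] for i in keep)
    den = sum(masks[i] for i in keep)
    return base + np.divide(num, den, out=np.zeros_like(num), where=den > 0)


merged = merge(task_deltas, task_masks, keep=range(n_tasks))

# Exact unlearning of task 2: re-merge without its stored update. Because every
# task's update and mask were computed in isolation, no retraining or per-task
# recomputation of the remaining models is required.
unlearned = merge(task_deltas, task_masks, keep=[i for i in range(n_tasks) if i != 2])
```

The key design point the sketch illustrates is that masks are determined independently per task before merging, so deleting any one task's data only requires dropping its stored update and re-averaging, which is what keeps exact unlearning cheap relative to merging methods that share information across tasks.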