🤖 AI Summary
This work addresses the challenge of machine unlearning under long-tailed data distributions, where existing methods suffer from heterogeneous and skewed unlearning deviation, leading to unlearning bias. To mitigate this issue, the authors propose a plug-and-play, instance-level dynamic loss reweighting approach that introduces, for the first time, an unlearning-aware mechanism tailored to long-tailed unlearning scenarios. This mechanism adaptively evaluates the unlearning status of each sample by comparing its prediction probability against the distribution of unseen instances from the same class, and dynamically adjusts its loss weight accordingly. Coupled with a balancing-factor modulation strategy, the method enables fine-grained, adaptive control over the unlearning process. Extensive experiments demonstrate that the proposed approach significantly outperforms current state-of-the-art techniques across various long-tailed unlearning settings, effectively reducing unlearning bias while preserving model utility and unlearning effectiveness.
📝 Abstract
Machine unlearning, which aims to efficiently remove the influence of specific data from trained models, is crucial for upholding data privacy regulations like the ``right to be forgotten''. However, existing research predominantly evaluates unlearning methods on relatively balanced forget sets. This overlooks a common real-world scenario where data to be forgotten, such as a user's activity records, follows a long-tailed distribution. Our work is the first to investigate this critical research gap. We find that in such long-tailed settings, existing methods suffer from two key issues: \textit{Heterogeneous Unlearning Deviation} and \textit{Skewed Unlearning Deviation}. To address these challenges, we propose FaLW, a plug-and-play, instance-wise dynamic loss reweighting method. FaLW innovatively assesses the unlearning state of each sample by comparing its predictive probability to the distribution of unseen data from the same class. Based on this, it uses a forgetting-aware reweighting scheme, modulated by a balancing factor, to adaptively adjust the unlearning intensity for each sample. Extensive experiments demonstrate that FaLW achieves superior performance. Code is available at \textbf{Supplementary Material}.
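The abstract describes comparing each forget sample's predictive probability to the distribution of unseen same-class data, then reweighting its loss via a balancing factor. The paper's exact formulation is not given here, so the sketch below is only an illustrative guess at what such an instance-wise, forgetting-aware reweighting could look like: the function name, the rank-based comparison, and the `gamma` exponent are all assumptions, not FaLW's actual equations.

```python
import numpy as np

def forgetting_aware_weights(forget_probs, forget_labels,
                             unseen_probs_by_class, gamma=1.0):
    """Illustrative per-sample loss weights for long-tailed unlearning.

    forget_probs: predicted true-class probability for each forget sample.
    forget_labels: class label of each forget sample.
    unseen_probs_by_class: dict mapping class -> array of predicted
        probabilities on held-out (unseen) samples of that class, used
        as a per-class reference distribution.
    gamma: balancing factor modulating how sharply weights are scaled
        (an assumed functional form, not the paper's).
    """
    weights = np.empty(len(forget_probs), dtype=float)
    for i, (p, y) in enumerate(zip(forget_probs, forget_labels)):
        ref = unseen_probs_by_class[y]
        # Fraction of unseen same-class samples this forget sample still
        # exceeds: a high rank suggests the sample is not yet "forgotten",
        # so it receives a larger unlearning weight.
        rank = float(np.mean(p > ref))
        weights[i] = rank ** gamma
    return weights

# Hypothetical usage: two forget samples of class 0; the confident one
# (0.9) gets full weight, the already-forgotten one (0.1) gets none.
w = forgetting_aware_weights(
    forget_probs=np.array([0.9, 0.1]),
    forget_labels=np.array([0, 0]),
    unseen_probs_by_class={0: np.array([0.2, 0.3, 0.5])},
    gamma=1.0,
)
```

Such a rank-based comparison keeps the scheme plug-and-play: it only needs model outputs on forget and held-out data, not any change to the unlearning objective itself.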