🤖 AI Summary
This paper addresses group-wise performance degradation and fairness deterioration in machine unlearning, which arise when the forget set is non-uniformly distributed, in particular when it is concentrated within specific demographic groups. To this end, we introduce the novel concept of *group-robust machine unlearning*. To jointly ensure unlearning effectiveness and group fairness, we propose MIU: a method that minimizes the mutual information between model features and group information to achieve unlearning, while incorporating group-aware sample reweighting and mutual information calibration with the original model. Experiments on three benchmark datasets demonstrate that MIU achieves high forgetting success while significantly mitigating performance collapse on dominant groups. Crucially, it maintains balanced predictive performance across demographic groups, outperforming existing baseline methods in both unlearning efficacy and fairness preservation.
📝 Abstract
Machine unlearning is an emerging paradigm to remove the influence of specific training data (i.e., the forget set) from a model while preserving its knowledge of the rest of the data (i.e., the retain set). Previous approaches assume the forget data to be drawn uniformly from all training datapoints. However, if the data to unlearn is dominant in one group, we empirically show that performance for this group degrades, leading to fairness issues. This work tackles the overlooked problem of non-uniformly distributed forget sets, which we call group-robust machine unlearning, by presenting a simple, effective strategy that mitigates the performance loss in dominant groups via sample distribution reweighting. Moreover, we present MIU (Mutual Information-aware Machine Unlearning), the first approach for group robustness in approximate machine unlearning. MIU minimizes the mutual information between model features and group information, achieving unlearning while reducing performance degradation in the dominant group of the forget set. Additionally, MIU exploits sample distribution reweighting and mutual information calibration with the original model to preserve group robustness. We conduct experiments on three datasets and show that MIU outperforms standard methods, achieving unlearning without compromising model robustness. Source code available at https://github.com/tdemin16/group-robust_machine_unlearning.
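The abstract's sample distribution reweighting can be illustrated with a minimal sketch. This is one plausible scheme, not the paper's exact formulation (see the linked repository for the real implementation): each retained sample is weighted so that the retain set's group distribution matches the group distribution of the original training data, counteracting the skew introduced when the forget set is concentrated in one group.

```python
import numpy as np

def group_reweights(train_groups, retain_groups):
    """Per-sample weights for the retain set (illustrative sketch).

    Weights each retained sample by the ratio between its group's
    frequency in the original training data and its frequency in the
    retain set, so the reweighted retain distribution over groups
    matches the original training distribution.
    """
    groups = np.unique(train_groups)
    train_freq = {g: np.mean(train_groups == g) for g in groups}
    retain_freq = {g: np.mean(retain_groups == g) for g in groups}
    return np.array([train_freq[g] / retain_freq[g] for g in retain_groups])

# Toy example: training set is 80% group 0, 20% group 1; the forget set
# removes 50 group-0 samples, leaving a skewed retain set.
train_groups = np.array([0] * 80 + [1] * 20)
retain_groups = np.array([0] * 30 + [1] * 20)
weights = group_reweights(train_groups, retain_groups)
```

After reweighting, group 0 again carries 80% of the total sample mass in the retain set, so a loss averaged with these weights does not over-penalize the group that dominated the forget set.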