🤖 AI Summary
Existing approximate machine unlearning methods are prone to catastrophic forgetting or training instability in large-scale or adversarial deletion scenarios because of heterogeneity across neural network layers. This work proposes SRAGU, an algorithm that introduces, for the first time, a statistical roughness metric derived from the heavy-tailed spectrum of weight matrices to quantify layer-wise stability. SRAGU allocates unlearning gradient updates through spectral analysis, concentrating them in spectrally stable layers while suppressing perturbations in unstable ones. By integrating WeightWatcher-style heavy-tailed exponent estimation with an adaptive gradient unlearning mechanism, the method significantly outperforms baseline approaches on behavioral alignment, prediction discrepancy, and KL divergence, reducing information leakage and approaching the performance of the gold-standard model retrained from scratch.
📝 Abstract
Machine unlearning aims to remove the influence of a designated forget set from a trained model while preserving utility on the retained data. In modern deep networks, approximate unlearning frequently fails under large or adversarial deletions due to pronounced layer-wise heterogeneity: some layers exhibit stable, well-regularized representations while others are brittle, undertrained, or overfit, so naive update allocation can trigger catastrophic forgetting or unstable dynamics. We propose Statistical-Roughness Adaptive Gradient Unlearning (SRAGU), a mechanism-first unlearning algorithm that reallocates unlearning updates using layer-wise statistical roughness, operationalized via heavy-tailed spectral diagnostics of layer weight matrices. Starting from an Adaptive Gradient Unlearning (AGU) sensitivity signal computed on the forget set, SRAGU estimates a WeightWatcher-style heavy-tailed exponent for each layer, maps it to a bounded spectral stability weight, and uses this stability signal to reweight the AGU sensitivities before applying the same minibatch update form. This concentrates unlearning motion in spectrally stable layers while damping updates in unstable or overfit layers, improving stability under hard deletions. We evaluate unlearning via behavioral alignment to a gold reference model retrained from scratch on the retained data, using empirical prediction-divergence and KL-to-gold proxies on a forget-focused query set. We additionally report membership inference auditing as a complementary leakage signal, treating forget-set points as should-be-forgotten members during evaluation.
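The per-layer pipeline the abstract describes (estimate a heavy-tailed exponent from the weight spectrum, map it to a bounded stability weight, then reweight the forget-set gradient update) can be sketched as follows. This is a minimal illustration under stated assumptions: the Hill estimator, the `[alpha_lo, alpha_hi]` "stable" band, and the gradient-ascent update form are placeholders, not the paper's exact specification.

```python
import numpy as np

def hill_tail_exponent(eigs, k_frac=0.5):
    """WeightWatcher-style heavy-tailed exponent, sketched with a Hill
    estimator on the top fraction of the layer's eigenvalue spectrum.
    (The paper's actual estimator is not specified here; this is an
    illustrative stand-in.)"""
    eigs = np.sort(np.asarray(eigs, dtype=float))[::-1]
    k = max(2, int(len(eigs) * k_frac))
    top = eigs[:k]
    # Hill MLE for a Pareto tail: alpha = 1 + k / sum(log(lam_i / lam_k))
    return 1.0 + k / np.sum(np.log(top / top[-1]))

def stability_weight(alpha, alpha_lo=2.0, alpha_hi=6.0):
    """Map alpha to a bounded [0, 1] stability weight. Layers inside an
    assumed 'well-regularized' band get full weight; layers outside it
    (overfit or undertrained spectra) are smoothly damped."""
    if alpha_lo <= alpha <= alpha_hi:
        return 1.0
    d = min(abs(alpha - alpha_lo), abs(alpha - alpha_hi))
    return float(np.exp(-d))  # exponential falloff outside the band

def sragu_step(weights, forget_grads, lr=0.1):
    """One SRAGU-style minibatch update (sketch): scale each layer's
    forget-set sensitivity (here simply its forget-set gradient) by the
    layer's spectral stability weight, then ascend on the forget loss."""
    updated = {}
    for name, W in weights.items():
        eigs = np.linalg.svd(W, compute_uv=False) ** 2  # spectrum of W^T W
        s = stability_weight(hill_tail_exponent(eigs))
        updated[name] = W + lr * s * forget_grads[name]
    return updated
```

Note the design point this makes concrete: the unlearning step itself is unchanged from AGU; only a scalar per-layer multiplier derived from the spectrum modulates how much each layer moves.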