🤖 AI Summary
Existing 3D-aware generative models face challenges in identity forgetting: they require costly retraining, suffer degraded generation quality, and lack scalability for multi-identity removal. Method: we propose a fine-tuning-free, large-scale generative forgetting framework. Its core innovations are: (1) a personalized proxy latent mechanism that disentangles identity representations, diverting each forgotten identity's reconstructions to a visually coherent alternative; and (2) a continual utility preservation objective that jointly optimizes differentiable rendering and contrastive losses to maintain image fidelity, diversity, and identity separation during forgetting. The method supports both parallel and sequential forgetting, without relying on template faces or distorting outputs. Results: experiments demonstrate efficient removal of up to 200 identities, with retention utility improved by up to 700% over state-of-the-art baselines. Crucially, the method preserves high-fidelity, 3D-consistent synthesis, establishing the first scalable, practical identity-forgetting paradigm for 3D-aware generative models.
📝 Abstract
Recent advances in 3D-aware generative models have enabled high-fidelity image synthesis of human identities. However, this progress raises urgent questions around user consent and the ability to remove specific individuals from a model's output space. We address this by introducing SUGAR, a framework for scalable generative unlearning that enables the removal of many identities (simultaneously or sequentially) without retraining the entire model. Rather than projecting unwanted identities to unrealistic outputs or relying on static template faces, SUGAR learns a personalized surrogate latent for each identity, diverting reconstructions to visually coherent alternatives while preserving the model's quality and diversity. We further introduce a continual utility preservation objective that guards against degradation as more identities are forgotten. SUGAR achieves state-of-the-art performance in removing up to 200 identities, while delivering up to a 700% improvement in retention utility compared to existing baselines. Our code is publicly available at https://github.com/judydnguyen/SUGAR-Generative-Unlearn.
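The abstract describes two interacting components: a personalized surrogate latent that diverts each forgotten identity's reconstruction to a coherent alternative, and a continual utility preservation term that guards retained identities against degradation. The pure-Python sketch below illustrates how such a combined objective might be structured; it is a simplified assumption using squared-error terms (the actual framework optimizes differentiable rendering and contrastive losses), and all names are illustrative, not the paper's API.

```python
# Hypothetical sketch (not the authors' implementation) of a SUGAR-style
# unlearning objective. Each identity slated for removal is pulled toward a
# learned personalized surrogate latent, while a continual utility
# preservation term keeps behavior on retained identities unchanged.

def forget_loss(z_identity, z_surrogate):
    """Squared distance pulling the forgotten identity's latent toward its
    personalized surrogate, so reconstructions land on a coherent
    alternative face rather than a distorted output or a fixed template."""
    return sum((a - b) ** 2 for a, b in zip(z_identity, z_surrogate))

def utility_loss(outputs_retained, outputs_original):
    """Continual utility preservation: outputs for retained identities
    should stay close to the pre-forgetting model's outputs."""
    n = len(outputs_retained)
    return sum((a - b) ** 2
               for a, b in zip(outputs_retained, outputs_original)) / n

def total_loss(z_identity, z_surrogate,
               outputs_retained, outputs_original, lam=1.0):
    # lam trades off forgetting strength against retention utility;
    # summing per-identity terms would extend this to many identities.
    return (forget_loss(z_identity, z_surrogate)
            + lam * utility_loss(outputs_retained, outputs_original))
```

In this toy form, forgetting is "done" when the identity latent coincides with its surrogate (forget term is zero) while the retained outputs remain unchanged (utility term is zero), mirroring the paper's goal of removal without collateral quality loss.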