🤖 AI Summary
This work addresses the challenge of *controlled forgetting* in GANs when the original training data is inaccessible—specifically, the precise removal of sensitive attributes (e.g., gender, race) from generated samples. We propose a two-stage *adapt-then-unlearn* framework grounded in semantic direction modeling within parameter space. The method integrates adaptive fine-tuning on negative samples, adversarial training on positive samples, and parameter-repulsion regularization under gradient constraints to preserve generation fidelity. Theoretically, we characterize the role of the repulsion regularizer, overcoming the high-fidelity forgetting bottleneck inherent in advanced GANs (e.g., StyleGAN). Extensive experiments on MNIST, AFHQ, and CelebA-HQ demonstrate significant suppression of target attributes, negligible degradation in Fréchet Inception Distance (< 0.5), and near-lossless visual fidelity.
📝 Abstract
Owing to growing concerns about privacy and regulatory compliance, it is desirable to regulate the output of generative models. To that end, the objective of this work is to prevent the generation of outputs containing undesired features from a pre-trained Generative Adversarial Network (GAN) when the underlying training data set is inaccessible. Our approach is inspired by the observation that the parameter space of GANs exhibits meaningful directions that can be leveraged to suppress specific undesired features. However, such directions usually result in the degradation of the quality of generated samples. Our proposed two-stage method, 'Adapt-then-Unlearn,' excels at unlearning such undesired features while also maintaining the quality of generated samples. In the initial stage, we adapt a pre-trained GAN on a set of negative samples (containing undesired features) provided by the user. Subsequently, we train the original pre-trained GAN using positive samples, along with a repulsion regularizer. This regularizer encourages the learned model parameters to move away from the parameters of the adapted model (first stage) while not degrading the generation quality. We provide theoretical insights into the proposed method. To the best of our knowledge, our approach stands as the first method addressing unlearning within the realm of high-fidelity GANs (such as StyleGAN). We validate the effectiveness of our method through comprehensive experiments, encompassing both class-level unlearning on the MNIST and AFHQ datasets and feature-level unlearning tasks on the CelebA-HQ dataset. Our code and implementation are available at: https://github.com/atriguha/Adapt_Unlearn.
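The second-stage objective described above—a standard loss on positive samples plus a repulsion term pushing the parameters away from the adapted (stage-one) model—can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the exponential form of the repulsion term, the function names, and the weighting scheme (`lam`, `sigma`) are assumptions chosen for clarity.

```python
import math

def repulsion_loss(theta, theta_neg, sigma=1.0):
    """Hypothetical repulsion term: large when the current parameters
    `theta` are close to the adapted (negative-sample) parameters
    `theta_neg`, and decaying toward zero as they move apart.
    The exponential-of-negative-squared-distance form is an
    illustrative choice, not necessarily the paper's regularizer."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(theta, theta_neg))
    return math.exp(-sq_dist / sigma)

def stage2_objective(positive_loss, theta, theta_neg, lam=0.1):
    """Sketch of the second-stage objective: the GAN loss computed on
    positive samples plus a weighted repulsion penalty that discourages
    the parameters from staying near the adapted model."""
    return positive_loss + lam * repulsion_loss(theta, theta_neg)
```

Minimizing this objective trades off generation quality on positive samples (the first term) against distance from the adapted model in parameter space (the second term), which is the intuition behind preserving fidelity while unlearning.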