🤖 AI Summary
This work addresses catastrophic forgetting of source-domain identity knowledge when fine-tuning generative models under extreme few-shot conditions (fewer than 10 images), a failure mode that degrades generation quality. To mitigate this, the authors propose an approach that combines identity injection with feature-consistency alignment. The method comprises an identity injection module and an identity substitution module, the latter built from a style-content decoupler and a reconstruction modulator, which together preserve source-domain identity information during target-domain adaptation. Extensive experiments on multiple public datasets show that the proposed method outperforms state-of-the-art techniques, with consistent gains across five evaluation metrics and improvements in both identity fidelity and overall visual quality of the generated images.
📝 Abstract
Training generative models with limited data poses a severe risk of mode collapse. A common remedy is to adapt a large pretrained generative model to a target domain with very few samples (fewer than 10), known as few-shot generative model adaptation. However, existing methods often forget source-domain identity knowledge during adaptation, which degrades the quality of images generated in the target domain. To address this, we propose Identity Injection and Preservation (I$^2$P), which leverages identity injection and consistency alignment to preserve source identity knowledge. Specifically, we first introduce an identity injection module that integrates source-domain identity knowledge into the target domain's latent space, ensuring that generated images retain key identity knowledge of the source domain. Second, we design an identity substitution module, consisting of a style-content decoupler and a reconstruction modulator, to further strengthen source-domain identity preservation. We enforce identity-consistency constraints by aligning features produced under identity substitution, thereby preserving identity knowledge. Both quantitative and qualitative experiments show that our method achieves substantial improvements over state-of-the-art methods on multiple public datasets across five metrics.
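The consistency-alignment idea in the abstract can be sketched as a simple feature-matching loss: features extracted from a generated image and from its identity-substituted counterpart are pulled together, penalizing drift away from the source identity. This is a minimal illustrative sketch, not the authors' implementation; the function name, the L1 choice, and the random feature maps standing in for extractor outputs are all assumptions.

```python
import numpy as np

def identity_consistency_loss(feat_gen: np.ndarray, feat_sub: np.ndarray) -> float:
    """Hypothetical identity-consistency term: mean L1 distance between
    features of a generated image (feat_gen) and features of its
    identity-substituted counterpart (feat_sub). Minimizing this
    encourages the adapted generator to keep source identity cues."""
    return float(np.abs(feat_gen - feat_sub).mean())

# Toy usage: random arrays stand in for (batch, channels, H, W) feature maps.
rng = np.random.default_rng(0)
f_gen = rng.standard_normal((2, 64, 8, 8))
f_sub = f_gen + 0.1 * rng.standard_normal((2, 64, 8, 8))  # small identity drift
loss = identity_consistency_loss(f_gen, f_sub)
```

In a training loop, a term like this would be weighted and added to the adversarial adaptation objective, trading off target-domain fit against source-identity preservation.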