🤖 AI Summary
This paper addresses the challenge of continual learning for text-to-image diffusion models under strict privacy and resource constraints: enabling multi-round acquisition of personalized concepts from users with no replay of historical data, zero additional storage, and no compromise on privacy.
Method: The paper proposes the first framework that leverages a diffusion classifier (DC) to model class-conditional density priors, establishing dual regularization in both parameter and function spaces. Integrated with LoRA-based fine-tuning, the approach achieves replay-free, zero-storage, privacy-preserving continual personalization without accessing past data or expanding the model's parameters.
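To make the DC prior concrete, here is a minimal sketch of how a diffusion classifier scores an input against candidate concepts: for each class conditioning `c`, it estimates the expected noise-prediction error of the diffusion model, whose negation acts as a class-conditional log-density proxy. The function names (`dc_scores`, `toy_eps_model`) and the analytic toy denoiser are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dc_scores(x, class_ids, eps_model, n_samples=64, rng=None):
    """Diffusion-classifier scores: for each class c, the negated
    Monte-Carlo estimate of E_{t,eps}[||eps - eps_model(x_t, t, c)||^2].
    Higher score = the class-conditional denoiser explains x better."""
    rng = rng if rng is not None else np.random.default_rng(0)
    scores = []
    for c in class_ids:
        err = 0.0
        for _ in range(n_samples):
            t = rng.uniform(0.1, 0.9)              # noise level
            eps = rng.standard_normal(x.shape)     # sampled noise
            # variance-preserving forward process (simplified schedule)
            x_t = np.sqrt(1.0 - t) * x + np.sqrt(t) * eps
            err += np.mean((eps - eps_model(x_t, t, c)) ** 2)
        scores.append(-err / n_samples)
    return np.array(scores)

# Hypothetical toy denoiser: each class c has a prototype mu_c, and
# inverting the forward process recovers the noise exactly only when
# x matches the prototype of class c.
mus = {0: np.full(4, 1.0), 1: np.full(4, -1.0)}
def toy_eps_model(x_t, t, c):
    return (x_t - np.sqrt(1.0 - t) * mus[c]) / np.sqrt(t)

s = dc_scores(mus[0], [0, 1], toy_eps_model)
# the true class (0) attains the highest (least negative) score
```

In the paper's setting, these per-concept scores supply the class-specific signal used to regularize the model during each new personalization round.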
Contribution/Results: The method significantly outperforms state-of-the-art baselines (e.g., C-LoRA) across multiple benchmarks, achieving a superior balance between efficient acquisition of novel concepts and stable retention of previously learned ones. It establishes a new paradigm for private, lightweight, and sustainable personalization of diffusion models.
📝 Abstract
Personalized text-to-image diffusion models have grown popular for their ability to efficiently acquire a new concept from user-defined text descriptions and a few images. However, in the real world, a user may wish to personalize a model on multiple concepts but one at a time, with no access to the data from previous concepts due to storage/privacy concerns. When faced with this continual learning (CL) setup, most personalization methods fail to find a balance between acquiring new concepts and retaining previous ones -- a challenge that continual personalization (CP) aims to solve. Inspired by the successful CL methods that rely on class-specific information for regularization, we resort to the inherent class-conditioned density estimates, also known as diffusion classifier (DC) scores, for continual personalization of text-to-image diffusion models. Namely, we propose using DC scores for regularizing the parameter-space and function-space of text-to-image diffusion models, to achieve continual personalization. Using several diverse evaluation setups, datasets, and metrics, we show that our proposed regularization-based CP methods outperform the state-of-the-art C-LoRA, and other baselines. Finally, by operating in the replay-free CL setup and on low-rank adapters, our method incurs zero storage and parameter overhead, respectively, over the state-of-the-art. Our project page: https://srvcodes.github.io/continual_personalization/
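The dual regularization described above can be sketched as a training loss with two penalties: a function-space term that keeps the current model's noise predictions close to the frozen previous model's, and a parameter-space term that penalizes drift of the LoRA weights weighted by a per-parameter importance (EWC-style). This is a generic illustration under assumed names (`cp_loss`, `importance`); the paper's actual regularizers are derived from DC scores rather than this simplified form.

```python
import numpy as np

def cp_loss(pred_new, pred_old, lora_new, lora_old, importance,
            lam_fn=1.0, lam_pm=1.0):
    """Sketch of dual regularization for continual personalization:
    - function-space: MSE between new and old noise predictions,
    - parameter-space: importance-weighted quadratic penalty on the
      low-rank adapter weights (no extra parameters are added)."""
    fn_term = np.mean((pred_new - pred_old) ** 2)
    pm_term = np.sum(importance * (lora_new - lora_old) ** 2)
    return lam_fn * fn_term + lam_pm * pm_term

# Usage: the loss vanishes when nothing drifts, and grows with drift
# in either space.
p = np.ones(3)
w = np.zeros(5)
imp = np.ones(5)          # hypothetical per-parameter importance
base = cp_loss(p, p, w, w, imp)        # no drift
drift = cp_loss(p + 0.1, p, w + 0.1, w, imp)
```

Because both penalties act only on the existing low-rank adapters and the frozen previous model's outputs, no replay buffer or extra parameter storage is required, matching the zero-overhead claim above.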