AI Summary
This work addresses catastrophic forgetting in customized diffusion models during continual learning of new concepts, a problem worsened by existing approaches' neglect of positive knowledge transfer between related concepts. To this end, we propose the FLLP framework, which introduces, for the first time, a hierarchical parent-child concept modeling mechanism grounded in hyperbolic geometry. By embedding concepts into the Lorentz manifold, our method integrates hierarchical priors into the continual learning process, enabling previously acquired concepts to guide the acquisition of new ones. This approach not only mitigates catastrophic forgetting but also enhances knowledge reuse, yielding consistent improvements in robustness, generalization, and continual learning performance across three public datasets and a synthetic benchmark.
Abstract
Custom Diffusion Models (CDMs) offer impressive capabilities for personalization in generative modeling, yet they remain vulnerable to catastrophic forgetting when learning new concepts sequentially. Existing approaches primarily focus on minimizing interference between concepts, often neglecting the potential for positive inter-concept interactions. In this work, we present Forget Less by Learning from Parents (FLLP), a novel framework that introduces a parent-child inter-concept learning mechanism in hyperbolic space to mitigate forgetting. By embedding concept representations in the Lorentz manifold, which is naturally suited to modeling tree-like hierarchies, we define parent-child relationships in which previously learned concepts guide the adaptation to new ones. Our method not only preserves prior knowledge but also supports continual integration of new concepts. We validate FLLP on three public datasets and one synthetic benchmark, showing consistent improvements in both robustness and generalization.
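To make the geometric ingredient concrete, the following is a minimal sketch of embedding points in the Lorentz (hyperboloid) model of hyperbolic space, which is the standard construction the abstract refers to. The function names, the toy "parent"/"child" vectors, and the use of NumPy are illustrative assumptions, not the paper's actual implementation; the paper's concrete parent-child mechanism is not reproduced here.

```python
import numpy as np

def lorentz_inner(x, y):
    # Lorentzian inner product: <x, y>_L = -x0*y0 + sum_i xi*yi
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map_origin(v):
    # Map a tangent vector v in R^n onto the hyperboloid
    # {x : <x, x>_L = -1, x0 > 0} via the exponential map at the
    # origin o = (1, 0, ..., 0).
    norm = np.linalg.norm(v)
    if norm < 1e-9:
        return np.concatenate(([1.0], np.zeros_like(v)))
    return np.concatenate(([np.cosh(norm)], np.sinh(norm) * v / norm))

def lorentz_distance(x, y):
    # Geodesic distance on the hyperboloid; clip guards against
    # floating-point values slightly below arccosh's domain.
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))

# Toy hierarchy (hypothetical vectors): a "parent" concept near the
# origin and two "children" pushed further out. Hyperbolic distance
# grows rapidly with depth, which is what makes the space well suited
# to tree-like concept hierarchies.
parent = exp_map_origin(np.array([0.3, 0.0]))
child_a = exp_map_origin(np.array([1.5, 0.2]))
child_b = exp_map_origin(np.array([1.5, -0.2]))

print(lorentz_distance(parent, child_a))   # parent-to-child distance
print(lorentz_distance(child_a, child_b))  # sibling-to-sibling distance
```

In a setup like this, a parent-child regularizer could, for instance, penalize the Lorentz distance between a new concept's embedding and its parent's, so that previously learned concepts constrain where new ones land on the manifold.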