Deep Generative Continual Learning using Functional LoRA: FunLoRA

πŸ“… 2025-10-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Deep generative models suffer from catastrophic forgetting in continual learning, and existing approaches rely on synthetic data replay, incur high training overhead, and degrade over long task sequences. This paper proposes FunLoRA, a parameter-efficient continual learning framework that requires only real data from the current task. Its core, functional low-rank adaptation, combines rank-1 matrix decomposition with differentiable functional reparameterization to enable dynamic, condition-aware feature modulation, applied as lightweight fine-tuning of flow-matching generative models. FunLoRA eliminates the need for historical data replay or generation, substantially mitigating forgetting. Empirical evaluation on joint continual generation and classification benchmarks demonstrates that FunLoRA surpasses diffusion-based state-of-the-art methods in accuracy, while reducing memory footprint by 37% and sampling latency by 52%.
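The summary's key mechanism, raising the rank of a rank-1 update through carefully selected functions, can be illustrated with a toy sketch. This is an assumption-laden illustration, not the authors' code: the choice of `sin`/`cos` and the additive combination are hypothetical stand-ins for the paper's "carefully selected functions". The idea is that an elementwise nonlinearity applied to a rank-1 outer product generically yields a matrix of much higher rank, so a single rank-1 parameter pair can behave like a higher-rank LoRA update.

```python
import numpy as np

# Hypothetical sketch of "functional rank increase" (not the paper's code).
rng = np.random.default_rng(0)
a = rng.standard_normal(16)  # rank-1 factor (input side)
b = rng.standard_normal(8)   # rank-1 factor (output side)

delta = np.outer(b, a)                     # plain LoRA-style update: rank 1
delta_fun = np.sin(delta) + np.cos(delta)  # elementwise functions: rank > 1

print(np.linalg.matrix_rank(delta))      # rank-1 baseline
print(np.linalg.matrix_rank(delta_fun))  # functionally increased rank
```

The appeal in a continual-learning setting is that each task only stores two small vectors (here 8 + 16 parameters) instead of a full higher-rank adapter, while the frozen backbone is never overwritten.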

πŸ“ Abstract
Continual adaptation of deep generative models holds tremendous potential and critical importance, given their rapid and expanding usage in text- and vision-based applications. Incremental training, however, remains highly challenging due to the catastrophic forgetting phenomenon, which makes it difficult for neural networks to effectively incorporate new knowledge. A common strategy consists of retraining the generative model on its own synthetic data in order to mitigate forgetting. Yet such an approach faces two major limitations: (i) the continually increasing training time eventually becomes intractable, and (ii) reliance on synthetic data inevitably leads to long-term performance degradation, since synthetic samples lack the richness of real training data. In this paper, we attenuate these issues by designing a novel and more expressive conditioning mechanism for generative models based on low-rank adaptation (LoRA) that exclusively employs rank-1 matrices, whose reparameterized matrix rank is functionally increased using carefully selected functions, dubbed functional LoRA (FunLoRA). Using this dynamic conditioning, the generative model is guaranteed to avoid catastrophic forgetting and needs only to be trained on data from the current task. Extensive experiments using flow-matching-based models trained from scratch show that our proposed parameter-efficient fine-tuning (PEFT) method surpasses prior state-of-the-art results based on diffusion models, reaching higher classification accuracy scores while only requiring a fraction of the memory cost and sampling time.
Problem

Research questions and friction points this paper is trying to address.

Mitigating catastrophic forgetting in continual generative learning
Reducing training time and memory costs for incremental adaptation
Preventing performance degradation from synthetic data reliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Functional LoRA uses rank-1 matrices with dynamic conditioning
Method avoids catastrophic forgetting by training only on the current task
Parameter-efficient fine-tuning reduces memory cost and sampling time
Victor Enescu
Sorbonne University, CNRS, LIP6, F-75005 Paris, France
Hichem Sahbi
CNRS, Sorbonne University