Bring Your Dreams to Life: Continual Text-to-Video Customization

πŸ“… 2025-12-05
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing personalized text-to-video (T2V) generation methods assume static, immutable personalized concepts, limiting incremental introduction of novel subjects and actions, and suffering from catastrophic forgetting and conditional neglect under continual learning. This paper proposes the first continual text-to-video generation framework, addressing these challenges via three core components: (1) a concept-specific attribute preservation module to retain prior knowledge; (2) a task-aware concept aggregation strategy for seamless integration of new concepts; and (3) a region-attention-guided noise estimation mechanism to enhance conditional controllability. Built upon diffusion models, our method synergistically combines adapter-based fine-tuning, layer-specific regional attention, conditional synthesis, and feature alignment. Experiments demonstrate significant improvements over state-of-the-art methods across multiple continual T2V generation benchmarks. The code is publicly available.

πŸ“ Abstract
Customized text-to-video generation (CTVG) has recently witnessed great progress in generating tailored videos from user-specific text. However, most CTVG methods assume that personalized concepts remain static and do not expand incrementally over time. Additionally, they struggle with forgetting and concept neglect when continuously learning new concepts, including subjects and motions. To resolve the above challenges, we develop a novel Continual Customized Video Diffusion (CCVD) model, which can continuously learn new concepts to generate videos across various text-to-video generation tasks by tackling forgetting and concept neglect. To address catastrophic forgetting, we introduce a concept-specific attribute retention module and a task-aware concept aggregation strategy. They capture the unique characteristics and identities of old concepts during training, and combine all subject and motion adapters of old concepts based on their relevance during testing. In addition, to tackle concept neglect, we develop a controllable conditional synthesis that enhances regional features and aligns video contexts with user conditions by incorporating layer-specific region attention-guided noise estimation. Extensive experimental comparisons demonstrate that our CCVD outperforms existing CTVG models. The code is available at https://github.com/JiahuaDong/CCVD.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in continual video generation
Resolves concept neglect by enhancing regional feature alignment
Enables continuous learning of new subjects and motions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept-specific attribute retention module for old concepts
Task-aware concept aggregation strategy during testing
Layer-specific region attention-guided noise estimation
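The paper does not include pseudocode on this page, but the task-aware concept aggregation idea (combining the subject and motion adapters of old concepts based on their relevance at test time) can be sketched roughly as follows. All names here (`aggregate_adapters`, `adapter_keys`, `adapter_deltas`) are hypothetical illustrations, not the authors' actual implementation: each concept adapter is represented as a flat vector of weight deltas plus a key embedding, and adapters are blended via softmax-normalized cosine relevance to the prompt embedding.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def softmax(xs):
    # Numerically stable softmax.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def aggregate_adapters(prompt_emb, adapter_keys, adapter_deltas):
    """Blend per-concept adapter weight deltas by their relevance
    to the prompt embedding (softmax over cosine similarities).
    Returns the relevance weights and the combined delta."""
    scores = [cosine(prompt_emb, k) for k in adapter_keys]
    weights = softmax(scores)
    combined = [0.0] * len(adapter_deltas[0])
    for w, delta in zip(weights, adapter_deltas):
        for i, d in enumerate(delta):
            combined[i] += w * d
    return weights, combined
```

Under this reading, a prompt closer to one concept's key pulls the merged adapter toward that concept's weights while still retaining a small contribution from the others, which is one plausible way relevance-based aggregation could mitigate forgetting at test time.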
πŸ”Ž Similar Papers
No similar papers found.