🤖 AI Summary
This work addresses the challenge of dynamic concept personalization in text-to-video generation, proposing the first method to jointly model appearance and motion characteristics from a single reference video. Methodologically, it introduces Set-and-Sequence, a two-stage framework: (1) a set-based stage fine-tunes LoRA layers on an unordered set of frames to decouple and learn static appearance; (2) a sequence-based stage adds Motion Residuals in the joint spatio-temporal weight space of a DiT backbone to explicitly capture temporal motion patterns. Crucially, this paradigm requires no architectural separation of spatial and temporal features while preserving high editability and cross-scene composability. Experiments demonstrate that the method significantly improves dynamic concept fidelity and generalization in the one-shot setting, establishing a new benchmark for dynamic concept personalization.
📝 Abstract
Personalizing generative text-to-image models has seen remarkable progress, but extending this personalization to text-to-video models presents unique challenges. Unlike static concepts, personalizing text-to-video models has the potential to capture dynamic concepts, i.e., entities defined not only by their appearance but also by their motion. In this paper, we introduce Set-and-Sequence, a novel framework for personalizing Diffusion Transformers (DiTs)-based generative video models with dynamic concepts. Our approach imposes a spatio-temporal weight space within an architecture that does not explicitly separate spatial and temporal features. This is achieved in two key stages. First, we fine-tune Low-Rank Adaptation (LoRA) layers using an unordered set of frames from the video to learn an identity LoRA basis that represents the appearance, free from temporal interference. In the second stage, with the identity LoRAs frozen, we augment their coefficients with Motion Residuals and fine-tune them on the full video sequence, capturing motion dynamics. Our Set-and-Sequence framework results in a spatio-temporal weight space that effectively embeds dynamic concepts into the video model's output domain, enabling unprecedented editability and compositionality while setting a new benchmark for personalizing dynamic concepts.
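The two-stage weight-space construction described above can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hypothetical NumPy toy showing how a frozen identity LoRA basis from Stage 1 can be augmented with trainable motion-residual coefficients in Stage 2, so that the appearance adaptation is preserved while motion is captured in the residual. The layer width `d`, rank `r`, and the factor names `B`, `A`, `dA` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hypothetical layer width and LoRA rank

W = rng.standard_normal((d, d))        # frozen pretrained DiT layer weight

# Stage 1 (set): learn an identity LoRA basis on an unordered frame set.
# B and A stand in for the learned appearance basis and coefficients.
B = 0.1 * rng.standard_normal((d, r))
A = 0.1 * rng.standard_normal((r, d))

def stage1_weight(W, B, A):
    # Appearance-only adaptation: W + B @ A
    return W + B @ A

# Stage 2 (sequence): freeze B and A; train only a motion residual dA
# on the ordered video, augmenting the LoRA coefficients.
dA = np.zeros((r, d))  # would be optimized on the full sequence

def stage2_weight(W, B, A, dA):
    return W + B @ (A + dA)

# With a zero residual, Stage 2 reduces exactly to the Stage 1 weights,
# so motion training starts from the learned appearance.
assert np.allclose(stage2_weight(W, B, A, np.zeros_like(A)),
                   stage1_weight(W, B, A))
```

One design point this sketch makes concrete: because the Stage 2 update lives inside the span of the frozen basis `B`, motion is encoded without disturbing the identity subspace learned in Stage 1.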