🤖 AI Summary
This work addresses the problem of high-fidelity, lightweight, user-specified identity control in video generation. Methodologically, we propose a plug-and-play fine-tuning framework that introduces a conditional image branch and a restricted self-attention mechanism, coupled with conditional position mapping, to achieve precise identity preservation. The approach fine-tunes only ~1% of the parameters of a pre-trained video diffusion model and requires merely 2,000 paired samples for adaptation. It is compatible with mainstream AIGC tools and supports diverse tasks, including subject-driven generation, pose reference, stylization, and face swapping. Experimental results demonstrate that our method significantly outperforms full-parameter fine-tuning baselines in both identity consistency and visual quality, while offering superior efficiency, generalizability, and ease of deployment.
📄 Abstract
Generating high-fidelity human videos that match user-specified identities is important yet challenging in the field of generative AI. Existing methods often rely on an excessive number of training parameters and lack compatibility with other AIGC tools. In this paper, we propose Stand-In, a lightweight, plug-and-play framework for identity preservation in video generation. Specifically, we introduce a conditional image branch into the pre-trained video generation model. Identity control is achieved through restricted self-attention with conditional position mapping, and can be learned quickly with only 2,000 pairs. Despite incorporating and training just $\sim$1% additional parameters, our framework achieves excellent results in video quality and identity preservation, outperforming other full-parameter training methods. Moreover, our framework can be seamlessly integrated into other tasks, such as subject-driven video generation, pose-referenced video generation, stylization, and face swapping.
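The abstract names two mechanisms: restricted self-attention, where video tokens attend over both video and identity-image tokens while the identity branch itself stays fixed, and conditional position mapping, which assigns the identity tokens positions distinct from the video sequence. The minimal NumPy sketch below illustrates one plausible reading of that design; the weight shapes, the position-offset scheme, and all names here are our own illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def restricted_self_attention(video, identity, Wq, Wk, Wv, pos_table):
    """Sketch (our assumption) of restricted self-attention:
    video tokens act as queries over the concatenation of video and
    identity tokens (keys/values); only updated video tokens are
    returned, so the identity branch is not modified.
    Conditional position mapping is approximated by offsetting the
    identity tokens' position ids past the video sequence."""
    T_v, d = video.shape
    T_i = identity.shape[0]
    pos = np.concatenate([np.arange(T_v), T_v + np.arange(T_i)])
    kv = np.concatenate([video, identity], axis=0) + pos_table[pos]
    q = (video + pos_table[:T_v]) @ Wq
    k, v = kv @ Wk, kv @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))       # (T_v, T_v + T_i)
    return attn @ v                            # (T_v, d)

# Tiny demo with random weights (illustrative only).
rng = np.random.default_rng(0)
d, T_v, T_i = 8, 6, 2
W = lambda: rng.standard_normal((d, d)) / np.sqrt(d)
pos_table = rng.standard_normal((T_v + T_i, d)) * 0.02
out = restricted_self_attention(rng.standard_normal((T_v, d)),
                                rng.standard_normal((T_i, d)),
                                W(), W(), W(), pos_table)
```

Because only the projections touching the identity tokens would need new weights in such a design, the trainable-parameter count stays a small fraction of the base model, consistent with the ~1% figure reported above.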