Stand-In: A Lightweight and Plug-and-Play Identity Control for Video Generation

📅 2025-08-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the problem of high-fidelity, lightweight, user-specified identity control in video generation. Methodologically, we propose a plug-and-play fine-tuning framework that introduces a conditional image branch and a restricted self-attention mechanism, coupled with conditional position mapping, to achieve precise identity preservation. The approach fine-tunes only ~1% of the parameters of a pre-trained video diffusion model and requires merely 2,000 paired samples for adaptation. It is compatible with mainstream AIGC tools and supports diverse tasks, including character-driven generation, pose reference, stylization, and face swapping. Experimental results demonstrate that our method significantly outperforms full-parameter fine-tuning baselines in both identity consistency and visual quality, while offering superior efficiency, generalizability, and ease of deployment.

๐Ÿ“ Abstract
Generating high-fidelity human videos that match user-specified identities is important yet challenging in the field of generative AI. Existing methods often rely on an excessive number of training parameters and lack compatibility with other AIGC tools. In this paper, we propose Stand-In, a lightweight and plug-and-play framework for identity preservation in video generation. Specifically, we introduce a conditional image branch into the pre-trained video generation model. Identity control is achieved through restricted self-attention with conditional position mapping, and can be learned quickly with only 2,000 pairs. Despite incorporating and training just $\sim$1% additional parameters, our framework achieves excellent results in video quality and identity preservation, outperforming other full-parameter training methods. Moreover, our framework can be seamlessly integrated into other tasks, such as subject-driven video generation, pose-referenced video generation, stylization, and face swapping.
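The abstract's core mechanism, joint attention over concatenated video and identity-image tokens with a restriction on which stream may attend to which, can be sketched as follows. This is a minimal, assumption-laden illustration, not the paper's implementation: the function name `restricted_self_attention`, the masking direction (identity tokens cannot attend to video tokens), and the omission of learned projections are all our assumptions, as the abstract does not specify these details.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def restricted_self_attention(video_tokens, id_tokens):
    """Sketch of one restricted self-attention step (hypothetical).

    Video and identity-image tokens are concatenated into one sequence,
    so video tokens can read identity features through ordinary
    self-attention. The mask below blocks the reverse direction
    (identity rows cannot see video columns) -- the direction of the
    restriction is an assumption, not stated in the abstract.
    Conditional position mapping would assign the identity tokens
    positions from the conditioning branch; it is omitted here.
    """
    d = video_tokens.shape[-1]
    x = np.concatenate([video_tokens, id_tokens], axis=0)  # (Nv+Ni, d)
    q = k = v = x  # learned Q/K/V projections omitted in this sketch
    scores = q @ k.T / np.sqrt(d)
    nv, ni = len(video_tokens), len(id_tokens)
    mask = np.zeros((nv + ni, nv + ni), dtype=bool)
    mask[nv:, :nv] = True          # identity rows blocked from video columns
    scores[mask] = -1e9            # effectively zero weight after softmax
    out = softmax(scores) @ v
    return out[:nv]                # only the video stream is passed onward
```

Because only the mask and the extra identity tokens are new, a branch like this can be bolted onto a frozen backbone with few trainable parameters, which is consistent with the ~1% figure the paper reports.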
Problem

Research questions and friction points this paper is trying to address.

Achieve identity control in video generation with minimal parameters
Ensure compatibility with existing AIGC tools for seamless integration
Maintain high video quality while preserving user-specified identities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight plug-and-play identity control framework
Conditional image branch with restricted self-attentions
Minimal training data and parameters for integration
Bowen Xue
Undergraduate Student, University of Science and Technology of China
Video Generation · Image Generation

Qixin Yan
WeChat, Tencent
AIGC · image/video generation · image processing

Wenjing Wang
WeChat Vision, Tencent Inc.

Hao Liu
WeChat Vision, Tencent Inc.

Chen Li
WeChat Vision, Tencent Inc.