🤖 AI Summary
Existing video personalization methods suffer from three key limitations: weak multi-subject support, poor generalization, and the absence of open evaluation benchmarks. To address these, we propose the first video diffusion model with built-in open-set multi-subject personalization capability, enabling zero-shot customization of arbitrary foreground subjects (e.g., humans, pets, or objects) against diverse backgrounds—without test-time optimization. Methodologically, we introduce a Diffusion Transformer module incorporating cross-modal attention between reference images and subject-level textual prompts, and design an automated data augmentation pipeline featuring large-scale frame sampling, synthetic pairing, and aggressive image augmentation. Our contributions include: (1) the first open-set video personalization benchmark explicitly targeting subject fidelity; and (2) substantial improvements in identity consistency, scene generalization, and multi-subject composability—demonstrated through comprehensive quantitative and qualitative comparisons that surpass state-of-the-art methods.
📝 Abstract
Video personalization methods allow us to synthesize videos with specific concepts such as people, pets, and places. However, existing methods often focus on limited domains, require time-consuming optimization per subject, or support only a single subject. We present Video Alchemist $-$ a video model with built-in multi-subject, open-set personalization capabilities for both foreground objects and background, eliminating the need for time-consuming test-time optimization. Our model is built on a new Diffusion Transformer module that fuses each conditional reference image and its corresponding subject-level text prompt with cross-attention layers. Developing such a large model presents two main challenges: dataset and evaluation. First, as paired datasets of reference images and videos are extremely hard to collect, we sample selected video frames as reference images and synthesize a clip of the target video. However, while models can easily denoise training videos given reference frames, they fail to generalize to new contexts. To mitigate this issue, we design a new automatic data construction pipeline with extensive image augmentations. Second, evaluating open-set video personalization is a challenge in itself. To address this, we introduce a personalization benchmark that focuses on accurate subject fidelity and supports diverse personalization scenarios. Finally, our extensive experiments show that our method significantly outperforms existing personalization methods in both quantitative and qualitative evaluations.