🤖 AI Summary
In text-to-video diffusion models, the coupling among motion, structure, and identity representations remains poorly understood; in particular, their entanglement causes identity leakage during motion transfer. Method: We first discover that query features (Q-features) in self-attention layers jointly encode subject identity and motion, and we reveal their coupled regulatory role throughout the denoising process. Based on this insight, we propose a training-free framework for Q-feature disentanglement and controllable injection, enabling zero-shot motion transfer and multi-shot video generation with cross-shot identity consistency. Contribution/Results: Experiments demonstrate a 20× improvement in motion transfer efficiency, significantly enhanced motion fidelity, and superior inter-frame identity stability. Our approach establishes a new paradigm for controllable editing in text-to-video generation.
📝 Abstract
Text-to-video diffusion models have shown remarkable progress in generating coherent video clips from textual descriptions. However, the interplay between motion, structure, and identity representations in these models remains under-explored. Here, we investigate how self-attention query features (a.k.a. Q features) simultaneously govern motion, structure, and identity, and examine the challenges that arise when these representations interact. Our analysis reveals that Q determines not only spatial layout; throughout denoising it also strongly influences subject identity, making it hard to transfer motion without transferring identity as a side effect. Understanding this dual role enabled us to control query feature injection (Q injection) and demonstrate two applications: (1) a zero-shot motion transfer method that is 20 times more efficient than existing approaches, and (2) a training-free technique for consistent multi-shot video generation, in which characters maintain identity across multiple shots while Q injection enhances motion fidelity.
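The core mechanism behind Q injection can be sketched in a few lines: during the self-attention step of a denoising pass, the queries computed from the current latents are replaced by queries cached from a reference (source-video) denoising pass, while the keys and values remain the target's own. The sketch below is a minimal single-head illustration under assumed shapes and projection matrices (`Wq`, `Wk`, `Wv`, and the `q_override` hook are hypothetical names, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv, q_override=None):
    """Single-head self-attention with optional query-feature injection.

    x          : (tokens, dim) latent features of the frame being denoised.
    q_override : (tokens, dim) queries cached from a reference denoising
                 pass; when provided, they replace this pass's own queries
                 (keys and values still come from the target latents).
    """
    Q = x @ Wq if q_override is None else q_override
    K = x @ Wk
    V = x @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (tokens, tokens)
    return attn @ V

# Illustrative usage: cache queries from a source pass, inject into a target.
rng = np.random.default_rng(0)
dim = 8
Wq, Wk, Wv = (rng.standard_normal((dim, dim)) for _ in range(3))
x_source = rng.standard_normal((4, dim))   # reference-video latents
x_target = rng.standard_normal((4, dim))   # newly generated latents

q_cached = x_source @ Wq                   # queries from the reference pass
out_injected = self_attention(x_target, Wq, Wk, Wv, q_override=q_cached)
```

In this picture, the paper's finding is that because Q carries identity as well as motion, naively injecting the full cached Q transfers both; disentangling which components of Q to inject, and at which layers and timesteps, is what separates motion transfer from identity leakage.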