🤖 AI Summary
Existing methods rely on 2D-rendered pose images to guide animation, suffering from poor generalization and loss of critical 3D motion dynamics. This work addresses open-world human image animation by introducing, for the first time, a paradigm that directly models raw 3D motion sequences (i.e., 4D motion), encoding spatial structure and temporal dynamics jointly. The core contributions are: (1) 4D Motion Tokenization (4DMoT), which quantizes continuous motion sequences into discrete, learnable motion tokens; and (2) a Motion-aware Video DiT (MV-DiT), which incorporates 4D positional encoding and a dedicated motion-attention mechanism to decouple motion control from appearance synthesis. The method achieves an FID-VID of 6.98, surpassing the second-best approach by 65%, and demonstrates strong generalization and robustness across diverse settings, including single/multi-person scenes, full/upper-body configurations, multiple artistic styles, and complex backgrounds.
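The summary describes 4DMoT only at a high level. A minimal sketch of the nearest-neighbour vector quantization such a tokenizer implies is shown below; the function names, array shapes, and the random codebook are illustrative assumptions, not the paper's actual API or training procedure.

```python
import numpy as np

def quantize_motion(motion, codebook):
    """Nearest-neighbour vector quantization of a 4D motion sequence.

    motion:   (T, J, 3) array of 3D joint positions over T frames
    codebook: (K, D) array of code vectors, D = 3 here for simplicity
              (a trained tokenizer would learn these jointly with an encoder)
    Returns token indices of shape (T, J) and the quantized motion (T, J, 3).
    """
    T, J, D = motion.shape
    flat = motion.reshape(-1, D)                                   # (T*J, D)
    # squared Euclidean distance from every joint vector to every code vector
    dist = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T*J, K)
    idx = dist.argmin(axis=1)                                      # one token per joint per frame
    quantized = codebook[idx].reshape(T, J, D)
    return idx.reshape(T, J), quantized

# Illustrative usage: 16 frames of 24 joints against a 512-entry codebook.
rng = np.random.default_rng(0)
tokens, quantized = quantize_motion(rng.normal(size=(16, 24, 3)),
                                    rng.normal(size=(512, 3)))
```

The discrete token grid (one index per frame and joint) is what replaces 2D-rendered pose images as the conditioning signal.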
📝 Abstract
Human image animation has gained increasing attention and developed rapidly due to its broad applications in digital humans. However, existing methods rely largely on 2D-rendered pose images for motion guidance, which limits generalization and discards essential 3D information for open-world animation. To tackle this problem, we propose MTVCrafter (Motion Tokenization Video Crafter), the first framework that directly models raw 3D motion sequences (i.e., 4D motion) for human image animation. Specifically, we introduce 4DMoT (4D motion tokenizer) to quantize 3D motion sequences into 4D motion tokens. Compared to 2D-rendered pose images, 4D motion tokens offer more robust spatio-temporal cues and avoid strict pixel-level alignment between the pose image and the character, enabling more flexible and disentangled control. We then introduce MV-DiT (Motion-aware Video DiT). By designing a unique motion attention with 4D positional encodings, MV-DiT can effectively leverage motion tokens as compact yet expressive 4D context for human image animation in the complex 3D world. This marks a significant step forward in the field and opens a new direction for pose-guided human video generation. Experiments show that MTVCrafter achieves state-of-the-art results with an FID-VID of 6.98, surpassing the second-best method by 65%. Powered by robust motion tokens, MTVCrafter also generalizes well to diverse open-world characters (single/multiple, full/half-body) across various styles and scenarios. Our video demos and code are provided in the supplementary material and at this anonymous GitHub link: https://anonymous.4open.science/r/MTVCrafter-1B13.
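The abstract's "motion attention with 4D positional encodings" can be sketched as a cross-attention in which video latents (queries) attend to motion tokens (keys/values) that carry a positional signal over the motion grid. This is a simplified sketch under stated assumptions: the real 4D encoding presumably spans frame index plus 3D spatial coordinates, whereas here frame and joint indices stand in as a proxy, and all names and dimensions are hypothetical.

```python
import numpy as np

def sincos_pe(positions, dim):
    """Sinusoidal positional encoding of scalar positions -> (N, dim), dim even."""
    half = dim // 2
    freqs = 1.0 / (10000 ** (np.arange(half) / half))
    angles = positions[:, None] * freqs[None, :]        # (N, half)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

def motion_attention(video_q, motion_tokens, frame_ids, joint_ids, dim):
    """Cross-attention: video latents attend to positionally-encoded motion tokens.

    video_q:       (Nq, dim) video latent queries
    motion_tokens: (Nm, dim) motion tokens from the tokenizer
    frame_ids:     (Nm,) frame index of each token   } proxy for the
    joint_ids:     (Nm,) joint index of each token   } 4D position
    """
    pe = sincos_pe(frame_ids.astype(float), dim) + sincos_pe(joint_ids.astype(float), dim)
    k = motion_tokens + pe                              # keys carry 4D position
    v = motion_tokens
    att = video_q @ k.T / np.sqrt(dim)                  # (Nq, Nm) scaled dot-product
    w = np.exp(att - att.max(axis=1, keepdims=True))    # numerically stable softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ v                                        # (Nq, dim)
```

Because the motion tokens are decoupled from pixel coordinates, this conditioning path does not require the pose signal to be spatially aligned with the character in the reference image.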