🤖 AI Summary
Existing pose-guided motion transfer methods struggle to model stylistic features, while motion style transfer approaches rely heavily on scarce motion-capture data and often produce physically implausible motions. This paper introduces a new task, Video-to-Video Motion Personalization, which learns and transfers character-specific motion patterns directly from unconstrained monocular videos. To this end, we construct PersonaVid, the first large-scale video dataset tailored for personalized motion, and propose PersonaAnimator, an end-to-end framework that integrates video sequence modeling, style encoding, and pose-guided generation to synthesize stylized motions faithfully. A Physics-aware Motion Style Regularization mechanism explicitly enforces joint-dynamics and ground-contact constraints, keeping generated motions physically plausible. Experiments demonstrate significant improvements over state-of-the-art methods across diverse action categories and stylistic transfers. Our work establishes a new benchmark for video-driven personalized motion transfer, advancing both realism and generalizability in human motion synthesis.
📝 Abstract
Motion generation has made remarkable progress in recent years, yet several limitations remain: (1) existing pose-guided character motion transfer methods merely replicate motion without learning its style characteristics, resulting in inexpressive characters; (2) motion style transfer methods rely heavily on motion-capture data, which is difficult to obtain; and (3) generated motions sometimes violate physical laws. To address these challenges, this paper pioneers a new task: Video-to-Video Motion Personalization. We propose PersonaAnimator, a novel framework that learns personalized motion patterns directly from unconstrained videos, enabling personalized motion transfer. To support this task, we introduce PersonaVid, the first video-based personalized motion dataset, comprising 20 motion content categories and 120 motion style categories. We further propose a Physics-aware Motion Style Regularization mechanism to enforce physical plausibility in the generated motions. Extensive experiments show that PersonaAnimator outperforms state-of-the-art motion transfer methods and sets a new benchmark for the Video-to-Video Motion Personalization task.