🤖 AI Summary
Existing text-to-video methods struggle to robustly transfer motion from a reference object to a target object with substantial appearance or structural differences, often resulting in motion discontinuities and temporal distortions. To address this, we propose a training-free cross-object adaptive motion transfer framework that uniquely integrates high-level semantic feature matching with low-level morphological redirection. Specifically, it achieves semantic alignment via fine-grained reference–target correspondence parsing, while modeling motion dynamics through shape redirection and temporal attention mechanisms. The method enables zero-shot, high-fidelity, temporally consistent motion transfer and supports high-quality, text-driven video generation across arbitrary object pairs. Extensive experiments demonstrate significant improvements over state-of-the-art approaches in complex cross-domain scenarios, establishing a novel paradigm for motion disentanglement and controllable video generation.
📝 Abstract
Existing text-to-video methods struggle to transfer motion smoothly from a reference object to a target object when the two differ significantly in appearance or structure. To address this challenge, we introduce MotionShot, a training-free framework capable of parsing reference-target correspondences in a fine-grained manner, thereby achieving high-fidelity motion transfer while preserving appearance coherence. Specifically, MotionShot first performs semantic feature matching to establish high-level alignments between the reference and target objects. It then establishes low-level morphological alignments through reference-to-target shape retargeting. By encoding motion with temporal attention, MotionShot can coherently transfer motion across objects even in the presence of significant appearance and structure disparities, as demonstrated by extensive experiments. The project page is available at: https://motionshot.github.io/.
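To give a rough intuition for the first stage, semantic feature matching can be viewed as finding, for each reference-object patch, the most semantically similar target-object patch. The sketch below illustrates this with plain cosine similarity over patch feature vectors; the function name and the use of NumPy are illustrative assumptions, not MotionShot's actual implementation, which operates on diffusion-model features.

```python
import numpy as np

def match_semantic_features(ref_feats: np.ndarray, tgt_feats: np.ndarray) -> np.ndarray:
    """Illustrative stand-in for semantic feature matching.

    ref_feats: (N_ref, D) patch features of the reference object.
    tgt_feats: (N_tgt, D) patch features of the target object.
    Returns, for each reference patch, the index of the most similar
    target patch under cosine similarity.
    """
    # L2-normalize so the dot product equals cosine similarity.
    ref = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    tgt = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    sim = ref @ tgt.T              # (N_ref, N_tgt) similarity matrix
    return sim.argmax(axis=1)      # best-matching target patch per reference patch
```

In the full method, these high-level correspondences are refined by the low-level shape-retargeting step, so that motion is transferred between semantically and morphologically aligned regions rather than raw pixel locations.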