MotionV2V: Editing Motion in a Video

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite significant advances in video generation models regarding fidelity and temporal coherence, precise and controllable motion editing of existing videos remains challenging. This paper introduces a novel “motion editing” paradigm: first, extracting sparse motion trajectories from input videos to enable fine-grained, arbitrary-timestep trajectory modifications; second, constructing a motion counterfactual video dataset and designing a motion-conditioned video diffusion architecture to naturally propagate edited trajectories and re-render the video. Our approach unifies sparse trajectory editing with generative resynthesis for the first time, enabling high-fidelity, temporally consistent motion redirection. A user study (four-alternative forced choice) demonstrates that over 65% of participants prefer our results—significantly outperforming state-of-the-art methods.
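The summary's first step, editing sparse trajectories at arbitrary timesteps, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Trajectory` container, a constant-offset edit, and the per-frame delta are all assumptions chosen to make the "motion edit" representation concrete.

```python
from dataclasses import dataclass

# A sparse trajectory: one 2D point per sampled frame index.
# Structure and names are illustrative, not taken from the paper.
@dataclass
class Trajectory:
    frames: list[int]                   # sampled frame indices
    points: list[tuple[float, float]]   # (x, y) per frame

def edit_trajectory(traj: Trajectory, start_frame: int,
                    offset: tuple[float, float]) -> Trajectory:
    """Apply a constant (dx, dy) offset from `start_frame` onward,
    leaving earlier points untouched -- an edit that starts at an
    arbitrary timestep."""
    dx, dy = offset
    new_points = [
        (x + dx, y + dy) if f >= start_frame else (x, y)
        for f, (x, y) in zip(traj.frames, traj.points)
    ]
    return Trajectory(traj.frames, new_points)

def motion_edit(orig: Trajectory, edited: Trajectory):
    """Per-frame deviation between input and output trajectories --
    the signal that conditions the generative backbone."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(orig.points, edited.points)]
```

In this toy form, frames before `start_frame` yield a zero delta and later frames carry the offset, which is exactly the "deviation between input and output trajectories" the summary describes.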

📝 Abstract
While generative video models have achieved remarkable fidelity and consistency, applying these capabilities to video editing remains a complex challenge. Recent research has explored motion controllability as a means to enhance text-to-video generation or image animation; however, we identify precise motion control as a promising yet under-explored paradigm for editing existing videos. In this work, we propose modifying video motion by directly editing sparse trajectories extracted from the input. We term the deviation between input and output trajectories a "motion edit" and demonstrate that this representation, when coupled with a generative backbone, enables powerful video editing capabilities. To achieve this, we introduce a pipeline for generating "motion counterfactuals": video pairs that share identical content but distinct motion, and we fine-tune a motion-conditioned video diffusion architecture on this dataset. Our approach allows for edits that start at any timestamp and propagate naturally. In a four-way head-to-head user study, our model achieves over 65 percent preference against prior work. Please see our project page: https://ryanndagreat.github.io/MotionV2V
Problem

Research questions and friction points this paper is trying to address.

Editing motion trajectories in existing videos
Generating motion counterfactuals with identical content
Enabling timestamp-specific motion edits with natural propagation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Editing video motion via sparse trajectory manipulation
Generating motion counterfactuals for training datasets
Fine-tuning motion-conditioned diffusion models for edits
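The last point, conditioning a diffusion model on the edited motion, implies turning sparse per-frame deltas into a dense signal the network can consume. The sketch below rasterizes deltas onto a per-frame grid; the layout, names, and zero-elsewhere convention are assumptions for illustration, not the paper's actual conditioning scheme.

```python
def rasterize_motion_edit(deltas, frames, positions, grid_h, grid_w):
    """Scatter sparse per-frame (dx, dy) motion-edit deltas into a
    dense per-frame grid, (0, 0) elsewhere -- the kind of map a
    motion-conditioned diffusion backbone could take alongside video
    latents. A hypothetical encoding, not the paper's.

    deltas:    list of (dx, dy), one per tracked frame
    frames:    corresponding frame indices
    positions: (x, y) grid location of the tracked point per frame
    """
    cond = [[[(0.0, 0.0) for _ in range(grid_w)]
             for _ in range(grid_h)] for _ in frames]
    for i, _t in enumerate(frames):
        x, y = positions[i]
        cond[i][int(y)][int(x)] = deltas[i]  # place delta at tracked point
    return cond
```

During fine-tuning, such a map would typically be concatenated with (or cross-attended against) the video latents so the backbone learns to propagate the edited trajectory through the re-rendered frames.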