Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing motion-controllable video generation methods suffer from coarse-grained motion control and poor scalability, limiting their practical applicability. To address this, we propose a latent trajectory guidance mechanism: dense point trajectories explicitly model object motion, and motion guidance maps, spatiotemporally aligned with the first-frame latent features, are constructed by propagating those features along the trajectories in latent space. This eliminates the need for auxiliary motion encoders and enables fine-grained motion control. Our method integrates seamlessly into mainstream image-to-video diffusion models and supports 480p, 5-second video generation. Evaluated on our newly established benchmark MoveBench and on public datasets, it significantly outperforms prior approaches, matching the motion controllability of Kling 1.5 Pro while offering superior efficiency, generality, and practical utility.

📝 Abstract
We present Wan-Move, a simple and scalable framework that brings motion control to video generative models. Existing motion-controllable methods typically suffer from coarse control granularity and limited scalability, leaving their outputs insufficient for practical use. We narrow this gap by achieving precise and high-quality motion control. Our core idea is to directly make the original condition features motion-aware for guiding video synthesis. To this end, we first represent object motions with dense point trajectories, allowing fine-grained control over the scene. We then project these trajectories into latent space and propagate the first frame's features along each trajectory, producing an aligned spatiotemporal feature map that specifies how each scene element should move. This feature map serves as the updated latent condition and integrates naturally into an off-the-shelf image-to-video model, e.g., Wan-I2V-14B, as motion guidance without any architecture change. It removes the need for auxiliary motion encoders and makes fine-tuning base models easily scalable. Through scaled training, Wan-Move generates 5-second, 480p videos whose motion controllability rivals Kling 1.5 Pro's commercial Motion Brush, as indicated by user studies. To support comprehensive evaluation, we further design MoveBench, a rigorously curated benchmark featuring diverse content categories and hybrid-verified annotations. It is distinguished by larger data volume, longer video durations, and high-quality motion annotations. Extensive experiments on MoveBench and public datasets consistently show Wan-Move's superior motion quality. Code, models, and benchmark data are made publicly available.
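The propagation step at the heart of this abstract, sampling the first frame's latent features at each track's starting point and writing them at the track's position in every later frame, is simple enough to sketch. The PyTorch snippet below is a minimal illustration under assumed conventions: the tensor shapes, the function name build_motion_guidance, and the nearest-cell scatter are illustrative choices, not the released Wan-Move implementation, which may handle visibility, sub-pixel positions, and collisions differently.

```python
# Minimal sketch of latent trajectory propagation (assumed reading of the
# paper's mechanism; shapes and names are illustrative, not official).
import torch

def build_motion_guidance(first_frame_latent: torch.Tensor,
                          trajectories: torch.Tensor) -> torch.Tensor:
    """Propagate first-frame latent features along dense point trajectories.

    first_frame_latent: [C, H, W] latent features of the first frame.
    trajectories: [T, N, 2] integer (x, y) positions of N tracked points over
                  T latent frames, already projected to latent resolution
                  (assumed torch.long and in-bounds for brevity).
    Returns: [T, C, H, W] motion guidance map, zero where no point lands.
    """
    C, H, W = first_frame_latent.shape
    T, N, _ = trajectories.shape
    guidance = torch.zeros(T, C, H, W,
                           dtype=first_frame_latent.dtype,
                           device=first_frame_latent.device)

    # Each point carries the feature sampled at its first-frame location.
    x0, y0 = trajectories[0, :, 0], trajectories[0, :, 1]
    point_feats = first_frame_latent[:, y0, x0]          # [C, N]

    # Propagation: write each point's feature at its position in every frame.
    for t in range(T):
        xt, yt = trajectories[t, :, 0], trajectories[t, :, 1]
        guidance[t, :, yt, xt] = point_feats
    return guidance

if __name__ == "__main__":
    # Toy shapes only; real latent sizes depend on the VAE and video length.
    latent = torch.randn(16, 30, 52)                       # [C, H, W]
    xs = torch.randint(0, 52, (13, 200, 1))                # x in [0, W)
    ys = torch.randint(0, 30, (13, 200, 1))                # y in [0, H)
    tracks = torch.cat([xs, ys], dim=-1)                   # [T, N, 2]
    print(build_motion_guidance(latent, tracks).shape)     # [13, 16, 30, 52]
```

Because the guidance map shares the first frame's latent layout, it can stand in as the updated condition without an extra motion encoder, which is what makes the approach architecture-preserving and easy to scale.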
Problem

Research questions and friction points this paper is trying to address.

How to achieve precise, fine-grained motion control in video generation
Existing methods' reliance on auxiliary motion encoders
Limited scalability when fine-tuning base video models for motion control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses dense point trajectories for fine-grained motion control
Projects trajectories into latent space for feature alignment
Integrates motion guidance without changing model architecture