LatentMove: Towards Complex Human Movement Video Generation

📅 2025-05-28
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing image-to-video (I2V) methods suffer from severe deformation artifacts and temporal inconsistency when synthesizing complex, non-repetitive human motions. To address this, we propose the first I2V framework specifically designed for complex human motion generation: a Diffusion Transformer (DiT)-based architecture incorporating a conditional control branch and learnable face/body tokens, jointly enabling explicit human structural modeling and latent-space temporal optimization. We further introduce CHV, the first benchmark dedicated to complex human motion videos, and design two novel evaluation metrics: optical flow error and silhouette consistency. On CHV, our method reduces optical flow error by 37% and improves silhouette matching accuracy by 42% over state-of-the-art approaches, demonstrating substantial gains in motion fidelity and temporal coherence.
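The summary names the two metrics but not their definitions. A minimal sketch of plausible formulations, assuming flow fields and person masks come from off-the-shelf estimators (the paper does not specify which), might look like this: optical flow error as mean endpoint error against ground-truth flow, and silhouette consistency as mean per-frame mask IoU.

```python
import numpy as np

def optical_flow_error(flow_gen, flow_gt):
    """Mean endpoint error between per-frame flow fields, shape (T, H, W, 2).

    Assumes flows were already estimated for both the generated and the
    ground-truth video; the estimator itself is not part of this sketch.
    """
    return float(np.mean(np.linalg.norm(flow_gen - flow_gt, axis=-1)))

def silhouette_consistency(mask_gen, mask_gt):
    """Mean per-frame IoU between binary person masks, shape (T, H, W)."""
    inter = np.logical_and(mask_gen, mask_gt).sum(axis=(1, 2))
    union = np.logical_or(mask_gen, mask_gt).sum(axis=(1, 2))
    # Guard against empty frames where the union is zero.
    return float(np.mean(inter / np.maximum(union, 1)))
```

Both functions reduce a whole clip to one scalar, so lower flow error and higher silhouette consistency would correspond to the gains the paper reports.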

๐Ÿ“ Abstract
Image-to-video (I2V) generation seeks to produce realistic motion sequences from a single reference image. Although recent methods exhibit strong temporal consistency, they often struggle with complex, non-repetitive human movements, leading to unnatural deformations. To tackle this issue, we present LatentMove, a DiT-based framework specifically tailored for highly dynamic human animation. Our architecture incorporates a conditional control branch and learnable face/body tokens to preserve consistency as well as fine-grained details across frames. We introduce Complex-Human-Videos (CHV), a dataset featuring diverse, challenging human motions designed to benchmark the robustness of I2V systems. We also introduce two metrics that assess the flow and silhouette consistency of generated videos against their ground truth. Experimental results indicate that LatentMove substantially improves human animation quality, particularly when handling rapid, intricate movements, thereby pushing the boundaries of I2V generation. The code, the CHV dataset, and the evaluation metrics will be available at https://github.com/ --.
Problem

Research questions and friction points this paper is trying to address.

Generating realistic human motion videos from single images
Handling complex, non-repetitive human movements without deformations
Improving temporal consistency and detail preservation in I2V generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

DiT-based framework for dynamic human animation
Conditional control branch and learnable tokens
Complex-Human-Videos dataset for benchmarking
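The "learnable face/body tokens" in the list above can be pictured as extra embeddings joined to the DiT's patch-token sequence, so every attention block sees persistent identity and body-structure cues alongside the video-latent patches. The token counts and dimensions below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes -- the paper does not publish them.
num_patch_tokens, dim = 256, 64   # flattened video-latent patch tokens
num_face, num_body = 4, 8         # learnable face / body token counts

patch_tokens = rng.standard_normal((num_patch_tokens, dim))
face_tokens = rng.standard_normal((num_face, dim))   # trained parameters
body_tokens = rng.standard_normal((num_body, dim))   # trained parameters

# Prepend the learnable tokens so self-attention can route facial and
# body-structure information to every patch token in every block.
sequence = np.concatenate([face_tokens, body_tokens, patch_tokens], axis=0)
```

After the transformer stack, the prepended tokens would simply be dropped before decoding the patch tokens back into video latents.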