๐ค AI Summary
Existing image-to-video (I2V) methods suffer from severe deformation artifacts and temporal inconsistency when synthesizing complex, non-repetitive human motions. To address this, we propose the first I2V framework specifically designed for complex human motion generation: a Diffusion Transformer (DiT)-based architecture incorporating a conditional control branch and learnable face/body tokens, jointly enabling explicit human structural modeling and latent-space temporal optimization. We further introduce CHVโthe first benchmark dedicated to complex human motion videosโand design two novel evaluation metrics: optical flow error and silhouette consistency. On CHV, our method reduces optical flow error by 37% and improves silhouette matching accuracy by 42% over state-of-the-art approaches, demonstrating substantial gains in motion fidelity and temporal coherence.
๐ Abstract
Image-to-video (I2V) generation seeks to produce realistic motion sequences from a single reference image. Although recent methods exhibit strong temporal consistency, they often struggle when dealing with complex, non-repetitive human movements, leading to unnatural deformations. To tackle this issue, we present LatentMove, a DiT-based framework specifically tailored for highly dynamic human animation. Our architecture incorporates a conditional control branch and learnable face/body tokens to preserve consistency as well as fine-grained details across frames. We introduce Complex-Human-Videos (CHV), a dataset featuring diverse, challenging human motions designed to benchmark the robustness of I2V systems. We also introduce two metrics to assess the flow and silhouette consistency of generated videos with their ground truth. Experimental results indicate that LatentMove substantially improves human animation quality--particularly when handling rapid, intricate movements--thereby pushing the boundaries of I2V generation. The code, the CHV dataset, and the evaluation metrics will be available at https://github.com/ --.