🤖 AI Summary
To address the scarcity and degradation of motion data caused by occlusion or out-of-frame conditions in web videos—leading to missing body parts—we propose a robust motion generation method based on part-wise confidence estimation. Our approach features: (1) a novel part-level confidence detection mechanism that dynamically identifies and masks low-confidence keypoints; (2) a part-aware variational autoencoder coupled with a masked autoregressive modeling framework, enabling noise-robust motion sequence reconstruction; and (3) K700-M, the first large-scale, real-world motion benchmark. Experiments demonstrate significant improvements over baselines in motion quality, semantic consistency, and diversity—on both clean and noisy data. K700-M establishes a standardized evaluation protocol for motion generation in realistic scenarios.
📝 Abstract
Extracting human motion from large-scale web videos offers a scalable solution to the data scarcity issue in character animation. However, in many video frames some body parts are invisible due to off-screen capture or occlusion. This creates a dilemma: discarding data that is missing any part limits scale and diversity, while retaining it compromises data quality and model performance.
To address this problem, we propose leveraging credible part-level data extracted from videos to enhance motion generation via a robust part-aware masked autoregression model. First, we decompose the human body into five parts and label the parts clearly visible in a video frame as "credible". Second, the credible parts are encoded into latent tokens by our proposed part-aware variational autoencoder. Third, we propose a robust part-level masked generation model that predicts masked credible parts while ignoring noisy ones.
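The core idea of "train only on credible parts" can be sketched as a masked loss over per-part latent tokens. The sketch below is a minimal illustration, assuming five body parts and a confidence threshold; all names (`PARTS`, `CONF_THRESHOLD`, `masked_part_loss`) are hypothetical, not the authors' actual identifiers.

```python
import numpy as np

# Hypothetical sketch of part-level credibility masking (not the
# authors' implementation). A part whose keypoint confidence falls
# below a threshold is treated as "noisy" and excluded from the loss.

PARTS = ["head", "torso", "left_arm", "right_arm", "legs"]  # 5 parts
CONF_THRESHOLD = 0.5  # assumed cutoff for a "credible" part

def credibility_mask(part_confidences):
    """Boolean mask: True where a part is clearly visible."""
    return np.asarray(part_confidences) >= CONF_THRESHOLD

def masked_part_loss(pred_tokens, target_tokens, part_confidences):
    """Mean squared error computed only over credible parts.

    pred_tokens, target_tokens: (num_parts, token_dim) arrays of
    per-part latent tokens (e.g. from a part-aware VAE).
    """
    credible = credibility_mask(part_confidences)
    if not credible.any():  # no usable parts in this frame
        return 0.0
    err = (pred_tokens - target_tokens) ** 2
    return float(err[credible].mean())
```

In this toy form, occluded parts simply contribute nothing to the objective, so the model is never penalized for (or supervised by) unreliable pose estimates.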
In addition, we contribute K700-M, a challenging new benchmark comprising approximately 200k real-world motion sequences for evaluation. Experimental results indicate that our method outperforms baselines on both clean and noisy datasets in terms of motion quality, semantic consistency, and diversity. Project page: https://boyuaner.github.io/ropar-main/