🤖 AI Summary
This work addresses the limitation of existing approaches in human motion analysis, which typically treat perception and generation as disjoint tasks, thereby hindering unified modeling of visual inputs and temporal skeletal dynamics. To overcome this, we propose Superman, a novel framework that, for the first time, unifies visual perception and 3D skeletal motion generation within a single architecture. Superman leverages a vision-guided motion tokenizer to construct a cross-modal action vocabulary, aligns 3D skeletal and visual data geometrically, and integrates them into a unified multimodal large language model (MLLM). This enables end-to-end joint handling of diverse tasks including 3D pose estimation, motion prediction, and interpolation. Experiments demonstrate that our method achieves state-of-the-art or competitive performance on benchmarks such as Human3.6M, confirming its effectiveness and scalability.
📝 Abstract
Human motion analysis tasks, such as temporal 3D pose estimation, motion prediction, and motion in-betweening, play an essential role in computer vision. However, current paradigms suffer from severe fragmentation. First, the field is split between "perception" models that understand motion from video but only output text, and "generation" models that cannot perceive from raw visual input. Second, generative MLLMs are often limited to single-frame, static poses using dense, parametric SMPL models, failing to handle temporal motion. Third, existing motion vocabularies are built from skeleton data alone, severing the link to the visual domain. To address these challenges, we introduce Superman, a unified framework that bridges visual perception with temporal, skeleton-based motion generation. Our solution is twofold. First, to overcome the modality disconnect, we propose a Vision-Guided Motion Tokenizer. Leveraging the natural geometric alignment between 3D skeletons and visual data, this module pioneers robust joint learning from both modalities, creating a unified, cross-modal motion vocabulary. Second, grounded in this motion language, a single, unified MLLM architecture is trained to handle all tasks. This module flexibly processes diverse, temporal inputs, unifying 3D skeleton pose estimation from video (perception) with skeleton-based motion prediction and in-betweening (generation). Extensive experiments on standard benchmarks, including Human3.6M, demonstrate that our unified method achieves state-of-the-art or competitive performance across all motion tasks. This showcases a more efficient and scalable path for generative motion analysis using skeletons.
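To make the tokenization idea concrete: a motion tokenizer maps continuous pose frames to discrete indices in a codebook, so a sequence of skeletons becomes a token sequence an MLLM can consume like text. The sketch below shows only the generic vector-quantization step with a fixed toy codebook; the paper's actual tokenizer presumably learns its cross-modal codebook jointly from 3D skeletal and visual data, and all names and dimensions here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of motion tokenization via nearest-neighbor vector
# quantization. Assumes a fixed, toy codebook; the real Vision-Guided
# Motion Tokenizer would learn its codebook from skeleton + visual data.
import math


def quantize(frame, codebook):
    """Map one pose frame (flat list of joint coordinates) to the index
    of its nearest codebook entry under Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(frame, codebook[i]))


def tokenize_motion(frames, codebook):
    """Turn a motion sequence into discrete tokens for a language model."""
    return [quantize(f, codebook) for f in frames]


# Toy 2-D "poses" standing in for flattened 3D joint vectors.
codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
tokens = tokenize_motion([[0.1, -0.1], [0.9, 1.2], [1.9, 0.1]], codebook)
print(tokens)  # [0, 1, 2]
```

Once motion is expressed as such tokens, perception (video to tokens) and generation (tokens to future motion) can share one vocabulary and one autoregressive model, which is the unification the abstract describes.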