Superman: Unifying Skeleton and Vision for Human Motion Perception and Generation

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing approaches in human motion analysis, which typically treat perception and generation as disjoint tasks, thereby hindering unified modeling of visual inputs and temporal skeletal dynamics. To overcome this, we propose Superman, a novel framework that, for the first time, unifies visual perception and 3D skeletal motion generation within a single architecture. Superman leverages a vision-guided motion tokenizer to construct a cross-modal action vocabulary, aligns 3D skeletal and visual data geometrically, and integrates them into a unified multimodal large language model (MLLM). This enables end-to-end joint handling of diverse tasks including 3D pose estimation, motion prediction, and interpolation. Experiments demonstrate that our method achieves state-of-the-art or competitive performance on benchmarks such as Human3.6M, confirming its effectiveness and scalability.

📝 Abstract
Human motion analysis tasks, such as temporal 3D pose estimation, motion prediction, and motion in-betweening, play an essential role in computer vision. However, current paradigms suffer from severe fragmentation. First, the field is split between "perception" models that understand motion from video but only output text, and "generation" models that cannot perceive from raw visual input. Second, generative MLLMs are often limited to single-frame, static poses using dense, parametric SMPL models, failing to handle temporal motion. Third, existing motion vocabularies are built from skeleton data alone, severing the link to the visual domain. To address these challenges, we introduce Superman, a unified framework that bridges visual perception with temporal, skeleton-based motion generation. Our solution is twofold. First, to overcome the modality disconnect, we propose a Vision-Guided Motion Tokenizer. Leveraging the natural geometric alignment between 3D skeletons and visual data, this module pioneers robust joint learning from both modalities, creating a unified, cross-modal motion vocabulary. Second, grounded in this motion language, a single, unified MLLM architecture is trained to handle all tasks. This module flexibly processes diverse, temporal inputs, unifying 3D skeleton pose estimation from video (perception) with skeleton-based motion prediction and in-betweening (generation). Extensive experiments on standard benchmarks, including Human3.6M, demonstrate that our unified method achieves state-of-the-art or competitive performance across all motion tasks. This showcases a more efficient and scalable path for generative motion analysis using skeletons.
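The abstract does not detail the Vision-Guided Motion Tokenizer, so as a rough illustration only, here is a minimal nearest-neighbour vector-quantization sketch of the general idea of a shared discrete motion vocabulary. All names, dimensions, and the alignment setup are hypothetical, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared codebook: K discrete "motion tokens", each a D-dim embedding.
K, D = 8, 4
codebook = rng.normal(size=(K, D))

def tokenize(features: np.ndarray) -> np.ndarray:
    """Map each per-frame feature vector (T, D) to the index of its
    nearest codebook entry, yielding a sequence of discrete tokens (T,)."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy per-frame features for the same 5-frame clip in two modalities.
# If skeletal and visual features are geometrically aligned (here simulated
# with a small perturbation), they tend to quantize to the same tokens.
skel_feats = rng.normal(size=(5, D))
vis_feats = skel_feats + 0.01 * rng.normal(size=(5, D))

skel_tokens = tokenize(skel_feats)
vis_tokens = tokenize(vis_feats)
print(skel_tokens, vis_tokens)
```

In this toy view, both modalities share one codebook, so downstream tasks (pose estimation, prediction, in-betweening) can all be phrased as sequence modeling over the same token alphabet.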
Problem

Research questions and friction points this paper is trying to address.

human motion perception
motion generation
skeleton-based modeling
vision-language alignment
temporal motion analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

unified framework
vision-guided motion tokenizer
cross-modal motion vocabulary
temporal motion generation
skeleton-based MLLM
Xinshun Wang
Peking University
human perception
Peiming Li
Peking University
Ziyi Wang
Peking University
Zhongbin Fang
Sun Yat-sen University
Zhichao Deng
Sun Yat-sen University
Songtao Wu
AI Researcher, Sony RDC
AI security, Human computer interaction, Edge AI
Jason Li
Nanyang Technological University
Mengyuan Liu
Peking University