Progressive Human Motion Generation Based on Text and Few Motion Frames

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-motion (T2M) methods struggle to precisely align generated motion with desired poses because natural-language descriptions are inherently ambiguous. To address this, the paper introduces a Text-Frame-to-Motion (TF2M) task, which conditions generation on text together with a small number of given frames (e.g., one) describing precise desired postures. The proposed Progressive Motion Generation (PMG) method generates frames in multiple stages, from low to high uncertainty, using a Text-Frame Guided Generator driven by frame-aware semantics of the text, the given frames, and the frames produced in earlier stages. A Pseudo-frame Replacement Strategy during training mitigates the error accumulation across stages that would otherwise create a train-test gap. On standard benchmarks, PMG outperforms existing T2M methods by a large margin even with a single given frame.

📝 Abstract
Although existing text-to-motion (T2M) methods can produce realistic human motion from text descriptions, it is still difficult to align the generated motion with the desired postures, since text alone is insufficient for precisely describing diverse postures. To achieve more controllable generation, an intuitive approach is to let the user input a few motion frames describing the precise desired postures. We therefore explore a new Text-Frame-to-Motion (TF2M) generation task that aims to generate motion from text and very few given frames. Intuitively, the closer a frame is to a given frame, the lower its uncertainty when conditioned on that given frame. Hence, we propose a novel Progressive Motion Generation (PMG) method that progressively generates a motion from frames with low uncertainty to frames with high uncertainty over multiple stages. In each stage, new frames are generated by a Text-Frame Guided Generator conditioned on frame-aware semantics of the text, the given frames, and the frames generated in previous stages. Additionally, to alleviate the train-test gap caused by the multi-stage accumulation of incorrectly generated frames during testing, we propose a Pseudo-frame Replacement Strategy for training. Experimental results show that our PMG outperforms existing T2M generation methods by a large margin with even one given frame, validating its effectiveness. Code will be released.
Problem

Research questions and friction points this paper is trying to address.

Generate human motion from text and a few given frames
Align generated motion with precise desired postures
Order generation stages by frame-level uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive Motion Generation (PMG) method
Text-Frame Guided Generator with frame-aware semantics
Pseudo-frame Replacement Strategy for training
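The two core ideas above can be illustrated with a minimal sketch (not the authors' released code): frames are scheduled into stages by their distance to the nearest given keyframe, following the paper's intuition that frames closer to a given frame have lower uncertainty, and ground-truth frames from earlier stages are sometimes swapped for model-generated ones during training. The stage partitioning, the distance metric, and the replacement probability `p` are all hypothetical choices for illustration.

```python
import random

def stage_schedule(num_frames, keyframe_idxs, num_stages):
    """Partition non-keyframe indices into stages of increasing uncertainty,
    where uncertainty is approximated by distance to the nearest keyframe
    (an illustrative proxy, not the paper's exact formulation)."""
    dists = {
        i: min(abs(i - k) for k in keyframe_idxs)
        for i in range(num_frames) if i not in keyframe_idxs
    }
    ordered = sorted(dists, key=lambda i: dists[i])  # low uncertainty first
    per_stage = -(-len(ordered) // num_stages)       # ceil division
    return [ordered[s * per_stage:(s + 1) * per_stage]
            for s in range(num_stages)]

def pseudo_frame_replace(gt_frames, generated_frames, prev_stage_idxs, p=0.5):
    """Training-time mix: with probability p, replace a ground-truth frame
    from an earlier stage with the model's own (pseudo) frame, so training
    sees the imperfect inputs it will face at test time."""
    out = list(gt_frames)
    for i in prev_stage_idxs:
        if random.random() < p:
            out[i] = generated_frames[i]
    return out
```

With one keyframe at index 0 and two stages over 8 frames, `stage_schedule(8, [0], 2)` groups frames 1-4 into the first (low-uncertainty) stage and frames 5-7 into the second, matching the progressive near-to-far generation order described in the abstract.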
Ling-an Zeng
School of Artificial Intelligence, Sun Yat-sen University, Zhuhai, Guangdong 519082, China
Gaojie Wu
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510275, China
Ancong Wu
Sun Yat-sen University
Computer Vision · Content Generation · AI Robotics
Jian-Fang Hu
Sun Yat-sen University
Computer Vision and Machine Learning
Wei-Shi Zheng
Professor @ Sun Yat-sen University
Computer Vision · Pattern Recognition · Machine Learning