🤖 AI Summary
In human-robot collaboration scenarios (e.g., rehabilitation, sports, manufacturing), virtual avatars and robots require realistic, individualized motion replication, yet existing models fail to capture subject-specific kinematic traits such as velocity distributions and amplitude envelopes.
Method: We propose a fully data-driven framework for personalized human motion generation: an LSTM-based temporal generative model trained end-to-end on scalar oscillatory motion recorded from real human subjects (a minimal illustrative sketch follows this summary).
Contribution/Results: The approach encodes individual rhythmic and amplitude characteristics while preserving inter-subject variability, and quantitative evaluation shows higher similarity to human data than state-of-the-art methods. Crucially, the learned models of subject-specific movement patterns are both distinguishable across subjects and reproducible, a foundational capability for natural group-level interaction with XR avatars and embodied agents.
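The paper itself does not include code here; as a rough illustration of what an LSTM-based temporal generative model for scalar motion can look like, the following PyTorch sketch learns next-step prediction on one subject's 1-D position traces and then rolls the network out autoregressively. All names (`ScalarMotionLSTM`, `hidden_size`, the training loop) and hyperparameters are assumptions for illustration, not the authors' implementation, which may also include stochastic components.

```python
# Hypothetical sketch of an LSTM generative model for scalar oscillatory
# motion (e.g., a 1-D position signal over time). Illustrative only; the
# paper's actual architecture and training details may differ.
import torch
import torch.nn as nn

class ScalarMotionLSTM(nn.Module):
    def __init__(self, hidden_size: int = 64, num_layers: int = 2):
        super().__init__()
        # Input: one scalar position per time step; output: next position.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x, state=None):
        # x: (batch, time, 1) past positions -> one-step-ahead predictions
        out, state = self.lstm(x, state)
        return self.head(out), state

    @torch.no_grad()
    def generate(self, seed, steps: int):
        # Autoregressively roll out new motion from a seed trajectory.
        self.eval()
        out, state = self.forward(seed)
        x = out[:, -1:, :]                  # first generated sample
        samples = [x]
        for _ in range(steps - 1):
            out, state = self.forward(x, state)
            x = out[:, -1:, :]
            samples.append(x)
        return torch.cat(samples, dim=1)    # (batch, steps, 1)

def train(model, trajectories, epochs: int = 50, lr: float = 1e-3):
    # Teacher-forced next-step prediction on one subject's recordings,
    # so the model absorbs that subject's individual dynamics.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for traj in trajectories:           # traj: (batch, time, 1)
            pred, _ = model(traj[:, :-1, :])
            loss = loss_fn(pred, traj[:, 1:, :])
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Training one such model per subject is one natural way to realize the "personalized" aspect: each network is fit only to that individual's recordings and is then compared against held-out data from the same and other subjects.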
📝 Abstract
The deployment of autonomous virtual avatars (in extended reality) and robots in human group activities, such as rehabilitation therapy, sports, and manufacturing, is expected to increase as these technologies become more pervasive. Designing cognitive architectures and control strategies to drive these agents requires realistic models of human motion. However, existing models provide only simplified descriptions of human motor behavior. In this work, we propose a fully data-driven approach, based on Long Short-Term Memory neural networks, to generate original motion that captures the unique characteristics of specific individuals. We validate the architecture on real recordings of scalar oscillatory motion. Extensive analyses show that our model effectively replicates the velocity distribution and amplitude envelope of the individual it was trained on, remains distinct from the motion of other individuals, and outperforms state-of-the-art models in similarity to human data.
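The abstract's two similarity criteria, velocity distribution and amplitude envelope, can be made concrete with standard signal-processing tools. The sketch below is an assumption about how such quantities might be computed (the paper's exact metrics are not stated here): it estimates velocity by finite differences, extracts the amplitude envelope from the analytic (Hilbert) signal, and compares real and generated motion with a 1-D Wasserstein distance.

```python
# Illustrative computation of the two traits the abstract highlights:
# velocity distribution and amplitude envelope of a scalar motion signal.
# The metrics actually used in the paper may differ; this is a sketch.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import wasserstein_distance

def velocity_samples(position: np.ndarray, fs: float) -> np.ndarray:
    """Finite-difference velocity of a 1-D position trace sampled at fs Hz."""
    return np.diff(position) * fs

def amplitude_envelope(position: np.ndarray) -> np.ndarray:
    """Envelope of an oscillatory signal via the analytic (Hilbert) signal."""
    centered = position - position.mean()
    return np.abs(hilbert(centered))

def motion_similarity(real: np.ndarray, generated: np.ndarray, fs: float) -> dict:
    """Lower values = generated motion closer to the real subject's motion."""
    return {
        "velocity_w1": wasserstein_distance(
            velocity_samples(real, fs), velocity_samples(generated, fs)),
        "envelope_w1": wasserstein_distance(
            amplitude_envelope(real), amplitude_envelope(generated)),
    }

# Toy usage: two noisy oscillations at the same frequency.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
real = np.sin(2 * np.pi * 0.5 * t) + 0.05 * np.random.randn(t.size)
fake = np.sin(2 * np.pi * 0.5 * t + 0.3) + 0.05 * np.random.randn(t.size)
print(motion_similarity(real, fake, fs))
```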