How Much Do Large Language Models Know about Human Motion? A Case Study in 3D Avatar Control

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates large language models’ (LLMs) understanding of human motion knowledge through instruction-driven 3D virtual character control. We propose a hierarchical planning framework: a high-level module generates semantically coherent action sequences, while a low-level module outputs joint coordinates; smooth, evaluable animations are synthesized via linear interpolation. Evaluation combines human judgment with automated positional comparison against ground-truth “oracle” trajectories. Crucially, we treat LLMs as verifiable knowledge probes—an approach first applied to motion understanding—revealing that LLMs accurately capture high-level motion intent and culturally specific actions (e.g., bowing, waving), yet struggle with high-degree-of-freedom joint constraints, multi-step complex motions, and fine-grained spatiotemporal parameters. Our work establishes a novel evaluation paradigm for motion understanding, advances embodied AI assessment, and rigorously characterizes the current capabilities and limitations of LLMs in human motion reasoning.

📝 Abstract
We explore the human motion knowledge of Large Language Models (LLMs) through 3D avatar control. Given a motion instruction, we prompt LLMs to first generate a high-level movement plan with consecutive steps (High-level Planning), then specify body part positions in each step (Low-level Planning), which we linearly interpolate into avatar animations as a clear verification lens for human evaluators. Using 20 carefully designed representative motion instructions with full coverage of basic movement primitives and balanced body part usage, we conduct comprehensive evaluations, including human assessment of both generated animations and high-level movement plans, as well as automatic comparison against oracle positions in low-level planning. We find that LLMs are strong at interpreting high-level body movements but struggle with precise body part positioning. While breaking down motion queries into atomic components improves planning performance, LLMs have difficulty with multi-step movements involving high-degree-of-freedom body parts. Furthermore, LLMs provide reasonable approximations of general spatial descriptions but fail to handle precise spatial specifications in text and the precise spatial-temporal parameters needed for avatar control. Notably, LLMs show promise in conceptualizing creative motions and distinguishing culturally-specific motion patterns.
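The automatic low-level evaluation compares LLM-predicted body part positions against oracle positions. A minimal sketch of such a positional comparison, using mean Euclidean distance as the metric and a simple per-step `{part: (x, y, z)}` format (both are illustrative assumptions, not the paper's actual formulation):

```python
import math

def mean_position_error(predicted, oracle):
    """Mean Euclidean distance between predicted and oracle body-part
    positions, averaged over all steps and parts.
    Assumed metric for illustration; the paper's exact comparison
    may differ."""
    distances = []
    for pred_step, oracle_step in zip(predicted, oracle):
        for part, pred_pos in pred_step.items():
            distances.append(math.dist(pred_pos, oracle_step[part]))
    return sum(distances) / len(distances)

# Example: the predicted left foot overshoots the oracle in step 2.
pred = [{"left_foot": (0.0, 0.0, 0.0)}, {"left_foot": (0.0, 0.0, 1.0)}]
gold = [{"left_foot": (0.0, 0.0, 0.0)}, {"left_foot": (0.0, 0.0, 0.5)}]
err = mean_position_error(pred, gold)  # (0.0 + 0.5) / 2 = 0.25
```

A per-part breakdown of the same distances would localize which joints (e.g. high-degree-of-freedom ones like wrists) drive the error, matching the paper's finding that those are hardest.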
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' knowledge of human motion via 3D avatar control
Evaluating LLMs' ability to plan high-level and low-level movements
Identifying LLMs' limitations in precise body part positioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate high-level movement plans
LLMs specify body part positions
Linear interpolation creates avatar animations
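The interpolation step above can be sketched as follows. The keyframe format (one `{part: (x, y, z)}` dict per planning step) is an assumption for illustration, not the paper's actual data structure:

```python
def interpolate_keyframes(keyframes, frames_per_step):
    """Expand per-step body-part keyframes into a dense animation by
    linear interpolation between consecutive steps.
    Keyframe format is assumed: list of {part: (x, y, z)} dicts."""
    animation = []
    for start, end in zip(keyframes, keyframes[1:]):
        for i in range(frames_per_step):
            t = i / frames_per_step  # interpolation weight in [0, 1)
            frame = {
                part: tuple((1 - t) * s + t * e
                            for s, e in zip(start[part], end[part]))
                for part in start
            }
            animation.append(frame)
    animation.append(keyframes[-1])  # emit the final keyframe exactly
    return animation

# Example: the right hand rises from hip level to overhead in one step.
keys = [
    {"right_hand": (0.3, 1.0, 0.0)},
    {"right_hand": (0.3, 2.0, 0.0)},
]
frames = interpolate_keyframes(keys, frames_per_step=4)  # 5 frames total
```

Linear interpolation keeps the synthesized animation a faithful function of the LLM's low-level plan, so any positioning error visible in the animation is attributable to the model rather than to the rendering.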