SMooGPT: Stylized Motion Generation using Large Language Models

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing stylized motion generation methods suffer from poor interpretability, coarse-grained control, limited generalization, and confinement to single-motion categories. To address these limitations, we propose the first large language model (LLM)-based, text-driven framework for stylized motion generation. Our approach introduces a novel three-stage paradigm—reasoning, composition, and generation—and constructs an interpretable, body-part-centric intermediate textual representation space to explicitly disentangle motion content from stylistic attributes. The method supports open-vocabulary natural language instructions (e.g., “walk in circles like a monkey”) and exhibits strong semantic understanding and cross-style generalization. Extensive experiments and user studies demonstrate that our framework significantly outperforms prior work in motion diversity, physical plausibility, and style fidelity.

📝 Abstract
Stylized motion generation is actively studied in computer graphics, especially benefiting from rapid advances in diffusion models. The goal of this task is to produce a novel motion that respects both the motion content and the desired motion style, e.g., "walking in a loop like a monkey". Existing research addresses this problem via motion style transfer or conditional motion generation, typically embedding the motion style into a latent space and guiding the motion implicitly in that same space. Despite this progress, such methods suffer from low interpretability and control, generalize poorly to new styles, and fail to produce motions other than "walking" due to the strong bias in public stylization datasets. In this paper, we propose to solve the stylized motion generation problem from a new reasoning-composition-generation perspective, based on three observations: i) human motion can often be effectively described in natural language in a body-part-centric manner, ii) LLMs exhibit a strong ability to understand and reason about human motion, and iii) human motion is inherently compositional, so new motion content or styles can be produced by effective recomposition. We therefore use a body-part text space as an intermediate representation and present SMooGPT, a fine-tuned LLM that acts as reasoner, composer, and generator when producing the desired stylized motion. Our method operates in the body-part text space with much higher interpretability, enables fine-grained motion control, effectively resolves potential conflicts between motion content and style, and generalizes well to new styles thanks to the open-vocabulary ability of LLMs. Comprehensive experiments, evaluations, and a user perceptual study demonstrate the effectiveness of our approach, especially in the pure text-driven stylized motion generation setting.
Problem

Research questions and friction points this paper is trying to address.

Generating novel motions with specific content and style
Overcoming low interpretability and limited style generalization
Enabling fine-grained control via body-part text reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses fine-tuned LLM as motion reasoner and generator
Leverages body-part text space for interpretable control
Employs reasoning-composition-generation framework for motion
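The three-stage framework listed above can be illustrated with a minimal sketch. The body-part partition, the `reason`/`compose`/`generate` function names, and the rule-based heuristics here are illustrative assumptions, not the paper's method: in SMooGPT these roles are played by a fine-tuned LLM operating in the body-part text space.

```python
# A toy, rule-based stand-in for the reasoning-composition-generation
# paradigm: split an instruction into content and style, compose a
# body-part-centric textual description, then emit a generator prompt.

BODY_PARTS = ["head", "torso", "left arm", "right arm", "left leg", "right leg"]

def reason(instruction: str) -> tuple[str, str]:
    """Split an open-vocabulary instruction into content and style phrases.

    Toy heuristic: "content like style" -> ("content", "style").
    """
    content, _, style = instruction.partition(" like ")
    return content.strip(), style.strip()

def compose(content: str, style: str) -> dict[str, str]:
    """Build a body-part-centric description, resolving content/style
    conflicts with a toy rule: content drives the legs (locomotion),
    style drives the upper body (posture)."""
    desc = {}
    for part in BODY_PARTS:
        if "leg" in part:
            desc[part] = f"{part}: {content}"        # content-driven locomotion
        else:
            desc[part] = f"{part}: move {style}"     # style-driven posture
    return desc

def generate(description: dict[str, str]) -> str:
    """Stand-in for the motion generator: join the per-part texts into a
    single prompt that a downstream motion decoder would consume."""
    return "; ".join(description[p] for p in BODY_PARTS)

if __name__ == "__main__":
    content, style = reason("walk in circles like a monkey")
    print(generate(compose(content, style)))
```

Because the intermediate representation is plain text per body part, each stage's output stays inspectable and editable, which is the interpretability and fine-grained-control argument the summary makes.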
Lei Zhong
University of Edinburgh, United Kingdom
Yi Yang
University of Edinburgh, United Kingdom
Changjian Li
Assistant Professor at University of Edinburgh
Computer Graphics · 3D Vision · Geometry Analysis and Processing · Medical Image Analysis