GeoMotionGPT: Geometry-Aligned Motion Understanding with Large Language Models

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that existing motion understanding methods struggle to enable large language models (LLMs) to perform fine-grained reasoning about complex actions due to a lack of geometric alignment between motion quantization and semantic embeddings. To bridge this gap, the authors propose a geometry-aware motion–language joint modeling framework that introduces, for the first time, an orthogonality constraint between the motion codebook and the LLM embedding space. A two-stage regularization strategy is designed to balance geometric consistency with semantic adaptability. The approach integrates a Gumbel-Softmax differentiable discrete decoder, sparse orthogonal projection mapping, and an orthogonality-regularized training mechanism. Evaluated on HumanML3D, the method achieves a 20% improvement over the current state of the art, demonstrating the effectiveness of geometric alignment in enhancing LLMs’ motion reasoning capabilities.
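The "Gumbel-Softmax differentiable discrete decoder" mentioned above refers to a standard trick for making discrete code selection trainable by gradient descent. As a minimal illustrative sketch (not the authors' implementation), a Gumbel-Softmax relaxation of codebook lookup can be written as:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable approximation of sampling a discrete code index.

    Adds Gumbel noise to the logits and applies a temperature-scaled
    softmax; as tau -> 0 the output approaches a one-hot selection,
    while remaining differentiable for any tau > 0.
    """
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))                    # Gumbel(0, 1) noise
    y = (logits + gumbel) / tau
    y = y - y.max(axis=-1, keepdims=True)           # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# Toy example: soft selection over a 4-entry motion codebook.
logits = np.array([2.0, 0.5, 0.1, -1.0])            # hypothetical encoder scores
soft = gumbel_softmax(logits, tau=0.5)              # near-one-hot weights
codebook = np.eye(4)                                # stand-in 4-d code vectors
quantized = soft @ codebook                         # soft codebook lookup
```

The quantized vector is a convex combination of codebook entries dominated by one code, which is what lets gradients flow through the otherwise discrete tokenization step.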

📝 Abstract
Discrete motion tokenization has recently enabled Large Language Models (LLMs) to serve as versatile backbones for motion understanding and motion-language reasoning. However, existing pipelines typically decouple motion quantization from semantic embedding learning, linking them solely via token IDs. This approach fails to effectively align the intrinsic geometry of the motion space with the embedding space, thereby hindering the LLM's capacity for nuanced motion reasoning. We argue that alignment is most effective when both modalities share a unified geometric basis. Therefore, instead of forcing the LLM to reconstruct the complex geometry among motion tokens from scratch, we present a novel framework that explicitly enforces orthogonality on both the motion codebook and the LLM embedding space, ensuring that their relational structures naturally mirror each other. Specifically, we employ a decoder-only quantizer with Gumbel-Softmax for differentiable training and balanced codebook usage. To bridge the modalities, we use a sparse projection that maps motion codes into the LLM embedding space while preserving orthogonality. Finally, a two-stage orthonormal regularization schedule enforces soft constraints during tokenizer training and LLM fine-tuning to maintain geometric alignment without hindering semantic adaptation. Extensive experiments on HumanML3D demonstrate that our framework achieves a 20% performance improvement over current state-of-the-art methods, validating that a unified geometric basis effectively empowers the LLM for nuanced motion reasoning.
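The "soft constraints" of the two-stage orthonormal regularization can be understood as a penalty on the Gram matrix of the codebook (or embedding) vectors. A minimal sketch, assuming the common Frobenius-norm form of such a regularizer rather than the paper's exact loss:

```python
import numpy as np

def orthonormal_penalty(C):
    """Soft orthonormality regularizer: ||C C^T - I||_F^2.

    Penalizes deviation of the rows of C (e.g. codebook entries or
    their projections into the LLM embedding space) from a mutually
    orthogonal, unit-norm set. Added to the training loss with a
    weight, it enforces geometry softly instead of as a hard constraint.
    """
    k = C.shape[0]
    gram = C @ C.T
    return float(np.sum((gram - np.eye(k)) ** 2))

# A perfectly orthonormal codebook incurs zero penalty...
assert orthonormal_penalty(np.eye(3)) == 0.0

# ...while correlated, unit-norm codes are penalized.
C = np.array([[1.0, 0.0],
              [0.8, 0.6]])
penalty = orthonormal_penalty(C)
```

Scheduling the weight on this term across the tokenizer-training and LLM-fine-tuning stages is one plausible reading of the paper's two-stage strategy: strong early to fix the geometry, relaxed later so semantic adaptation is not hindered.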
Problem

Research questions and friction points this paper is trying to address.

motion understanding
geometric alignment
large language models
motion tokenization
embedding space
Innovation

Methods, ideas, or system contributions that make the work stand out.

geometry alignment
orthogonal embedding
motion tokenization
large language models
differentiable quantization