FlowCoMotion: Text-to-Motion Generation via Token-Latent Flow Modeling

📅 2026-04-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-motion generation methods struggle to capture semantic meaning and fine-grained motion details simultaneously: continuous representations often conflate semantics with dynamics, while discrete representations sacrifice fine-grained information. To address this, the authors propose FlowCoMotion, a framework that unifies the strengths of continuous and discrete motion representations. FlowCoMotion employs a token-latent coupling network to jointly model semantic content and precise motion characteristics. The approach combines multi-view distillation, discrete temporal-resolution quantization, and flow-matching-based velocity field modeling, generating high-fidelity motion sequences by integrating an ODE from a simple prior. Evaluated on the HumanML3D and SnapMoGen benchmarks, the method achieves competitive performance.

📝 Abstract
Text-to-motion generation depends on learning motion representations that align semantically with language. Existing methods rely on either continuous or discrete motion representations; however, continuous representations entangle semantics with dynamics, while discrete representations lose fine-grained motion details. In this context, we propose FlowCoMotion, a novel motion generation framework that unifies both treatments from a modeling perspective. Specifically, FlowCoMotion employs token-latent coupling to capture both semantic content and high-fidelity motion details. In the latent branch, we apply multi-view distillation to regularize the continuous latent space, while in the token branch we use discrete temporal-resolution quantization to extract high-level semantic cues. The motion latent is then obtained by combining the representations from the two branches through a token-latent coupling network. A velocity field is subsequently predicted from the textual conditions, and an ODE solver integrates this field from a simple prior, transporting the sample toward the target motion. Extensive experiments show that FlowCoMotion achieves competitive performance on text-to-motion benchmarks, including HumanML3D and SnapMoGen.
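The generation step described in the abstract (an ODE solver integrating a predicted velocity field from a simple Gaussian prior) can be sketched as follows. This is a minimal illustration, not the paper's model: the `velocity_field` function below is a toy stand-in for the learned, text-conditioned velocity network, and the fixed `target` latent is a hypothetical placeholder for the motion latent being generated.

```python
import numpy as np

def velocity_field(x, t):
    # Toy stand-in for the learned, text-conditioned velocity network.
    # Under a linear (rectified-flow) probability path, the conditional
    # velocity toward a fixed endpoint is (target - x) / (1 - t).
    target = np.full_like(x, 2.0)  # hypothetical target motion latent
    return (target - x) / max(1.0 - t, 1e-3)

def sample(dim=8, steps=100, seed=0):
    # Draw from the simple prior (standard Gaussian), then Euler-integrate
    # the velocity field over t in [0, 1] to reach the target state.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_field(x, t)
    return x

latent = sample()  # final state lies at the (toy) target motion latent
```

In practice the velocity network would take the coupled token-latent representation and text embedding as inputs, and a higher-order ODE solver could replace plain Euler integration.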
Problem

Research questions and friction points this paper is trying to address.

text-to-motion generation
motion representation
semantic alignment
continuous representation
discrete representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

token-latent coupling
multi-view distillation
discrete temporal quantization
velocity field
ODE-based generation
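The "discrete temporal quantization" item above, which the abstract attributes to the token branch, can be illustrated with a minimal nearest-neighbor vector-quantization lookup. The codebook size, latent dimension, and `quantize` helper are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def quantize(latents, codebook):
    # Nearest-neighbor vector quantization: map each continuous frame
    # latent (rows of `latents`, shape (T, D)) to the index of its closest
    # codebook entry (rows of `codebook`, shape (K, D)) under L2 distance.
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    tokens = d2.argmin(axis=1)           # (T,) discrete token ids
    return tokens, codebook[tokens]      # ids and quantized latents

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 4))  # hypothetical 16-entry codebook
latents = rng.standard_normal((10, 4))   # 10 frames of 4-D latents
tokens, quantized = quantize(latents, codebook)
```

The resulting token sequence carries the high-level semantic cues, while the continuous latent branch retains the fine-grained detail that quantization discards.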
Dawei Guan
School of Artificial Intelligence and Data Science, University of Science and Technology of China
Di Yang
School of Mathematical Sciences, University of Science and Technology of China
mathematics
Chengjie Jin
School of Artificial Intelligence and Data Science, University of Science and Technology of China
Jiangtao Wang
Coventry University, United Kingdom
AI for Health, Crowd Sensing, Ubiquitous Computing, Digital Health