FineMotion: A Dataset and Benchmark with both Spatial and Temporal Annotation for Fine-grained Motion Generation and Editing

๐Ÿ“… 2025-07-26
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing text-to-motion approaches struggle to accurately model the fine-grained spatiotemporal dynamics of local body parts. To address this, we introduce FineMotion, the first large-scale motion dataset featuring *both* spatially fine-grained annotations (14 anatomically grounded joints) and temporally fine-grained alignment (motion segment-level text grounding), comprising 442K high-quality motion-text pairs. Leveraging FineMotion, we propose a text-guided local-global collaborative generation and editing framework that enables zero-shot fine-grained motion control atop mainstream diffusion-based models (e.g., MDM). Experiments demonstrate a +15.3% improvement in Top-3 accuracy for the MDM model, significantly improving controllability and fidelity in body-part-specific kinematics and temporal alignment. FineMotion establishes a new benchmark and technical pathway for high-precision, text-driven human motion synthesis and editing.

๐Ÿ“ Abstract
Generating realistic human motions from textual descriptions has undergone significant advancements. However, existing methods often overlook specific body part movements and their timing. In this paper, we address this issue by enriching the textual description with more details. Specifically, we propose the FineMotion dataset, which contains over 442,000 human motion snippets (short segments of human motion sequences) and their corresponding detailed descriptions of human body part movements. Additionally, the dataset includes about 95K detailed paragraphs describing the body part movements of entire motion sequences. Experimental results demonstrate the significance of our dataset for the text-driven fine-grained human motion generation task, most notably a +15.3% improvement in Top-3 accuracy for the MDM model. We further support a zero-shot pipeline for fine-grained motion editing, which enables detailed text-driven editing in both spatial and temporal dimensions. Dataset and code available at: CVI-SZU/FineMotion
Problem

Research questions and friction points this paper is trying to address.

Lack of detailed body-part movement annotations in existing motion generation datasets
Insufficient temporal and spatial precision in existing motion editing methods
Need for richer text descriptions to improve fine-grained motion generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the FineMotion dataset with detailed spatial and temporal annotations
Improves the accuracy of text-driven motion generation
Supports zero-shot fine-grained motion editing
๐Ÿ”Ž Similar Papers
No similar papers found.