Dense Motion Captioning

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of dense temporal semantic parsing in 3D human motion understanding by introducing *Dense Motion Captioning*: localizing multiple action segments within continuous 3D motion sequences and generating natural language descriptions for each. To support this task, we present CompMo—the first large-scale, complex-action dataset comprising 60K multi-action 3D sequences with precise temporal boundary annotations. Methodologically, we propose DEMO, a lightweight motion adapter that bridges 3D motion features and large language models to enable efficient spatiotemporal–semantic alignment. Evaluated on CompMo and multiple established benchmarks, DEMO significantly outperforms prior approaches, establishing the first strong baseline for dense 3D motion captioning. This work advances fine-grained motion understanding by enabling precise, segment-level linguistic interpretation of continuous 3D human motion.

📝 Abstract
Recent advances in integrating 3D human motion and language have focused primarily on text-to-motion generation, leaving motion understanding relatively unexplored. We introduce Dense Motion Captioning, a novel task that aims to temporally localize and caption actions within 3D human motion sequences. Current datasets fall short in providing detailed temporal annotations and predominantly consist of short sequences featuring few actions. To overcome these limitations, we present the Complex Motion Dataset (CompMo), the first large-scale dataset featuring richly annotated, complex motion sequences with precise temporal boundaries. Built through a carefully designed data generation pipeline, CompMo includes 60,000 motion sequences, each composed of between two and ten actions, accurately annotated with their temporal extents. We further present DEMO, a model that integrates a large language model with a simple motion adapter, trained to generate dense, temporally grounded captions. Our experiments show that DEMO substantially outperforms existing methods on CompMo as well as on adapted benchmarks, establishing a robust baseline for future research in 3D motion understanding and captioning.
Problem

Research questions and friction points this paper is trying to address.

Temporally localize and caption actions in 3D human motion sequences
Address limitations of current datasets lacking detailed temporal annotations
Generate dense temporally grounded captions for complex motion sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset with temporal annotations for complex motions
Model combining large language model with motion adapter
Generating dense temporally grounded motion captions
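The adapter idea above can be illustrated with a minimal sketch: a learned projection that maps per-frame 3D motion features into a language model's embedding space, so motion tokens can be consumed alongside text. All dimensions, names, and the linear form of the adapter are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of a motion adapter (not DEMO's actual design):
# a linear projection from per-frame motion features to LLM token
# embeddings. Dimensions below are illustrative assumptions.

rng = np.random.default_rng(0)

MOTION_DIM = 263   # e.g. a pose/velocity feature vector per frame (assumption)
LLM_DIM = 768      # language-model embedding size (assumption)

# Adapter parameters; in practice these would be trained end-to-end.
W = rng.standard_normal((MOTION_DIM, LLM_DIM)) * 0.02
b = np.zeros(LLM_DIM)

def adapt_motion(motion_seq: np.ndarray) -> np.ndarray:
    """Project a (T, MOTION_DIM) motion sequence to (T, LLM_DIM) tokens."""
    return motion_seq @ W + b

motion = rng.standard_normal((120, MOTION_DIM))  # 120-frame sequence
motion_tokens = adapt_motion(motion)

# The projected tokens would be prepended to the text-prompt embeddings,
# and the LLM decodes captions with temporal boundaries,
# e.g. "<0.0s-2.5s> a person waves".
print(motion_tokens.shape)  # (120, 768)
```

The design choice sketched here (a lightweight projection bridging a frozen motion encoder and an LLM) mirrors common adapter-style multimodal setups; the paper's actual adapter may differ in depth and training details.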