KPM-Bench: A Kinematic Parsing Motion Benchmark for Fine-grained Motion-centric Video Understanding

📅 2026-02-19
🤖 AI Summary
Existing video captioning models often struggle with fine-grained motion representation and are prone to action-related hallucinations. To address this, the authors propose Motion Parsing and Extraction (MoPE), an algorithm that integrates kinematic modeling with linguistic structure parsing to establish a large-model-free framework for evaluating motion hallucinations. Leveraging this approach, they introduce KPM-Bench, the first open-source benchmark specifically designed for limb-level dynamic understanding, comprising fine-grained video-text pairs, action-comprehension question answering, and a dedicated hallucination evaluation set. Experimental results demonstrate that MoPE significantly mitigates motion hallucinations and improves both accuracy and reliability in fine-grained action understanding tasks.

📝 Abstract
Despite recent advancements, video captioning models still face significant limitations in accurately describing fine-grained motion details and suffer from severe hallucination issues. These challenges become particularly prominent when generating captions for motion-centric videos, where precise depiction of intricate movements and limb dynamics is crucial yet often neglected. To bridge this gap, we introduce an automated annotation pipeline that integrates kinematic-based motion computation with linguistic parsing, enabling detailed decomposition and description of complex human motions. Based on this pipeline, we construct and release the Kinematic Parsing Motion Benchmark (KPM-Bench), a novel open-source dataset designed to facilitate fine-grained motion understanding. KPM-Bench consists of (i) fine-grained video-caption pairs that comprehensively illustrate limb-level dynamics in complex actions, (ii) diverse and challenging question-answer pairs focusing specifically on motion understanding, and (iii) a meticulously curated evaluation set designed to assess hallucination phenomena in motion descriptions. Furthermore, to address hallucination issues systematically, we propose the linguistically grounded Motion Parsing and Extraction (MoPE) algorithm, capable of accurately extracting motion-specific attributes directly from textual captions. Leveraging MoPE, we introduce a precise hallucination evaluation metric that functions independently of large-scale vision-language or language-only models. By integrating MoPE into the GRPO post-training framework, we effectively mitigate hallucination problems, significantly improving the reliability of motion-centric video captioning models.
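The core idea, extracting motion-specific attributes from caption text and scoring hallucinations against a reference without any large model, can be illustrated with a minimal sketch. The vocabularies, the `(verb, body part)` attribute schema, and the `hallucination_rate` metric below are illustrative assumptions for exposition, not the paper's actual MoPE algorithm.

```python
# Hypothetical sketch of a MoPE-style, large-model-free hallucination score:
# parse (motion verb, body part) pairs out of a caption, then measure how
# many predicted pairs are unsupported by the reference caption.
# Vocabularies and metric are assumptions, not the paper's method.

BODY_PARTS = {"arm", "arms", "leg", "legs", "hand", "hands",
              "torso", "head", "knee", "knees"}
MOTION_VERBS = {"raise", "raises", "lift", "lifts", "bend", "bends",
                "swing", "swings", "extend", "extends", "lower", "lowers"}

def extract_motion_attributes(caption: str) -> set[tuple[str, str]]:
    """Extract (verb, body_part) pairs by scanning nearby tokens."""
    tokens = [t.strip(".,").lower() for t in caption.split()]
    pairs = set()
    for i, tok in enumerate(tokens):
        if tok in MOTION_VERBS:
            # look a few tokens ahead for the body part being moved
            for nxt in tokens[i + 1 : i + 4]:
                if nxt in BODY_PARTS:
                    pairs.add((tok, nxt))
                    break
    return pairs

def hallucination_rate(predicted: str, reference: str) -> float:
    """Fraction of predicted motion attributes absent from the reference."""
    pred = extract_motion_attributes(predicted)
    ref = extract_motion_attributes(reference)
    if not pred:
        return 0.0
    return len(pred - ref) / len(pred)
```

Because the score is a deterministic set comparison over parsed attributes, it could also serve as a reward signal in an RL post-training loop such as GRPO, which is how the paper reports using MoPE.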
Problem

Research questions and friction points this paper is trying to address.

fine-grained motion understanding
video captioning
hallucination
motion-centric video
kinematic parsing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kinematic Parsing
Motion-centric Video Understanding
Hallucination Mitigation
MoPE Algorithm
Fine-grained Captioning
Authors
Boda Lin — Kuaishou Technology
Yongjie Zhu — Kuaishou Technology
Xiaocheng Gong — Kuaishou Technology
Wenyu Qin — Harbin Institute of Technology
Meng Wang — Kuaishou Technology