🤖 AI Summary
Current multimodal large language models (MLLMs) face significant limitations in fine-grained, video-based understanding of human actions and poses, largely because densely annotated data is costly and hard to scale. To address this, we introduce ActionArt, the first fine-grained action-pose description dataset, comprising thousands of videos of human actions, human-object interactions, and diverse scenarios with detailed limb-level annotations, accompanied by a comprehensive benchmark covering eight distinct subtasks. We further propose a weakly supervised training paradigm built on proxy tasks whose training data is generated automatically by existing MLLMs, improving the model's spatial and temporal perception while substantially reducing reliance on manual annotation. Experiments demonstrate that our approach achieves a 12.7% average accuracy gain across all eight subtasks, markedly narrowing the gap to the performance achieved with manually annotated fine-grained data.
📝 Abstract
Fine-grained understanding of human actions and poses in videos is essential for human-centric AI applications. In this work, we introduce ActionArt, a fine-grained video-caption dataset designed to advance research in human-centric multimodal understanding. Our dataset comprises thousands of videos capturing a broad spectrum of human actions, human-object interactions, and diverse scenarios, each accompanied by detailed annotations that meticulously label every limb movement. We develop eight sub-tasks to evaluate the fine-grained understanding capabilities of existing large multimodal models across different dimensions. Experimental results indicate that, while current large multimodal models perform commendably on many tasks, they often fall short of fine-grained understanding. We attribute this limitation to the scarcity of meticulously annotated data, which is costly and difficult to scale manually. To reduce this reliance on manual labels, we propose proxy tasks that enhance the model's perception abilities in both the spatial and temporal dimensions, carefully crafted so that they are driven by data generated automatically from existing MLLMs. Experimental results show that the proposed proxy tasks significantly narrow the gap to the performance achieved with manually annotated fine-grained data.
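To make the proxy-task idea concrete, below is a minimal, hypothetical sketch of how such training data might be generated without manual labels. The paper does not specify the exact proxy tasks; the two shown here (a spatial captioning task pseudo-labeled by an off-the-shelf MLLM, and a temporal frame-ordering task whose label is known by construction) as well as all names such as `caption_frame` and `build_proxy_samples` are illustrative assumptions, not the authors' implementation.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProxySample:
    """One auto-generated training example for a proxy task."""
    task: str    # e.g. "spatial_caption" or "temporal_order"
    prompt: str  # question shown to the model being trained
    answer: str  # pseudo-label (from an MLLM, or free by construction)

def build_proxy_samples(
    frames: List[str],                    # paths to frames sampled from one video
    caption_frame: Callable[[str], str],  # hypothetical wrapper around an existing MLLM
    rng: random.Random,
) -> List[ProxySample]:
    """Generate spatial and temporal proxy-task data with no manual labels."""
    samples: List[ProxySample] = []

    # Spatial proxy task: have the existing MLLM describe a single frame in
    # detail; its output serves as a pseudo-caption supervising fine-grained
    # spatial perception of pose and limb configuration.
    frame = rng.choice(frames)
    samples.append(ProxySample(
        task="spatial_caption",
        prompt=f"Describe the person's pose and limb positions in {frame}.",
        answer=caption_frame(frame),
    ))

    # Temporal proxy task: shuffle the clip's frames. The correct order is
    # known by construction, so this label costs nothing (no MLLM call).
    order = list(range(len(frames)))
    shuffled = order[:]
    rng.shuffle(shuffled)
    samples.append(ProxySample(
        task="temporal_order",
        prompt=f"Restore the original order of the shuffled frames {shuffled}.",
        answer=str(order),
    ))
    return samples

if __name__ == "__main__":
    # Stand-in for a real MLLM call; replace with an actual model or API.
    fake_mllm = lambda frame: f"A person raises their right arm (frame {frame})."
    data = build_proxy_samples(
        frames=["f0.jpg", "f1.jpg", "f2.jpg", "f3.jpg"],
        caption_frame=fake_mllm,
        rng=random.Random(0),
    )
    for s in data:
        print(s.task, "->", s.answer)
```

The design point this sketch illustrates is that the spatial task spends MLLM queries on pseudo-labels, while the temporal task gets its supervision for free from the shuffle itself, so a mix of both scales cheaply.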