🤖 AI Summary
This work addresses a limitation of single-modality approaches, which rely on either video or motion data alone and therefore struggle to capture both broad behavioral semantics and fine-grained actions. To this end, we propose ViMoNet, a multimodal joint-training framework that simultaneously models high-fidelity 3D motion sequences and general-purpose video spatiotemporal features, leveraging large language models to achieve cross-modal semantic alignment. Complementing the framework, we introduce VIMOS, a new multimodal dataset with dual-track annotations: motion-text and video-text pairs. Extensive experiments demonstrate that ViMoNet significantly outperforms state-of-the-art methods on behavior captioning, action understanding, and semantic reasoning tasks. We further establish ViMoNet-Bench, a dedicated benchmark for fine-grained behavior understanding, on which ViMoNet shows strong generalization and robustness across diverse scenarios.
📝 Abstract
This study investigates how large language models (LLMs) can be used to understand human behavior from motion and video data. In contrast to recent models that focus on motion data or videos alone, we argue that combining both modalities is essential for fully capturing the nuanced movements and semantics of human actions. To this end, we present ViMoNet, a simple yet effective framework for understanding, describing, and reasoning about human behavior. ViMoNet employs a joint training strategy that exploits the complementary strengths of two data types: detailed motion-text data, which is more precise, and general video-text data, which is broader but less detailed. This helps the model acquire rich spatiotemporal information about human behavior. We additionally present a new dataset named VIMOS that contains a variety of videos, motion sequences, instructions, and captions. To evaluate how well models understand human behavior, we construct ViMoNet-Bench, a standardized benchmark with carefully annotated samples. Our experiments show that ViMoNet outperforms existing methods in caption generation, motion understanding, and behavior interpretation.
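The abstract does not spell out the training loop, so the sketch below is only a minimal illustration of what joint training over motion-text and video-text pairs could look like: two modality-specific projections feed a shared language backbone trained with a single caption loss. All names (`ModalityEncoder`, `joint_step`), the feature dimensions, and the toy backbone are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of ViMoNet-style joint training: both modalities are
# projected into a shared language-model space and supervised with the same
# text loss. Architecture, dimensions, and losses are illustrative guesses.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Projects a modality-specific feature sequence into the LLM embedding space."""
    def __init__(self, in_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, llm_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)  # (batch, seq, llm_dim)

llm_dim = 512
motion_encoder = ModalityEncoder(in_dim=263, llm_dim=llm_dim)  # e.g. pose features (assumed dim)
video_encoder = ModalityEncoder(in_dim=768, llm_dim=llm_dim)   # e.g. video backbone features (assumed dim)
text_head = nn.Linear(llm_dim, 32000)                          # toy LM head over a 32k vocab

# Stand-in for the shared language-model backbone.
llm_backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True),
    num_layers=2,
)

params = (list(motion_encoder.parameters()) + list(video_encoder.parameters())
          + list(text_head.parameters()) + list(llm_backbone.parameters()))
optimizer = torch.optim.AdamW(params, lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def joint_step(motion_batch, video_batch):
    """One joint update: both modalities share the same backbone and caption loss."""
    optimizer.zero_grad()
    total = 0.0
    for feats, encoder, targets in (
        (motion_batch["feats"], motion_encoder, motion_batch["tokens"]),
        (video_batch["feats"], video_encoder, video_batch["tokens"]),
    ):
        tokens = encoder(feats)        # modality tokens in LLM space
        hidden = llm_backbone(tokens)  # shared language backbone
        logits = text_head(hidden)     # predict caption tokens per position
        total = total + loss_fn(logits.flatten(0, 1), targets.flatten())
    total.backward()
    optimizer.step()
    return float(total)

# Dummy batches: 2 samples, 16 time steps, 16 caption tokens each.
motion_batch = {"feats": torch.randn(2, 16, 263), "tokens": torch.randint(0, 32000, (2, 16))}
video_batch = {"feats": torch.randn(2, 16, 768), "tokens": torch.randint(0, 32000, (2, 16))}
print(joint_step(motion_batch, video_batch))
```

In this reading, the precise motion-text pairs and the broader video-text pairs contribute gradients to the same shared backbone, which is one plausible way the two data types could complement each other during training.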