ViMoNet: A Multimodal Vision-Language Framework for Human Behavior Understanding from Motion and Video

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of single-modality approaches—using either video or motion data alone—in comprehensively capturing human behavioral semantics and fine-grained actions. To this end, we propose ViMoNet, a novel multimodal joint-training framework that, for the first time, simultaneously models high-fidelity 3D motion sequences and general-purpose video spatiotemporal features, while leveraging large language models to achieve cross-modal semantic alignment. Complementing this, we introduce VIMOS, a new multimodal dataset featuring dual-track annotations: motion–text and video–text pairs. Extensive experiments demonstrate that ViMoNet significantly outperforms state-of-the-art methods on behavior captioning, action understanding, and semantic reasoning tasks. Furthermore, we establish ViMoNet-Bench—a dedicated benchmark for fine-grained behavior understanding—which validates ViMoNet’s strong generalization capability and robustness across diverse scenarios.

📝 Abstract
This study investigates how large language models (LLMs) can be used to understand human behavior from motion and video data. We argue that combining both modalities is essential to fully capture the nuanced movements and meanings of human actions, in contrast to recent models that focus solely on motion data or on videos. To address this, we present ViMoNet, a simple yet effective framework for understanding, describing, and reasoning about human action. ViMoNet employs a joint training strategy that leverages the strengths of two data types: detailed motion-text data, which is more precise, and generic video-text data, which is broader in coverage but less detailed. This helps the model learn rich spatial and temporal information about human behavior. In addition, we introduce a new dataset named VIMOS that contains a variety of videos, motion sequences, instructions, and captions. We also developed ViMoNet-Bench, a standardized benchmark with carefully labeled samples, to evaluate how well models understand human behavior. Our experiments show that ViMoNet outperforms existing methods in caption generation, motion understanding, and behavior interpretation.
Problem

Research questions and friction points this paper is trying to address.

Combining motion and video data for human behavior understanding
Developing a multimodal framework for action comprehension and inference
Creating a new dataset and benchmark for behavior analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines motion and video data for behavior analysis
Uses joint training with motion-text and video-text
Introduces new dataset VIMOS for evaluation
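The joint-training idea above can be sketched in miniature: samples from the precise motion-text track and the broader video-text track are merged into one shuffled training stream, with a modality tag so a shared model can route each sample to the right encoder branch. This is a minimal illustrative sketch, not the paper's actual pipeline; all names and data structures below are assumptions.

```python
import random

# Hypothetical toy samples standing in for ViMoNet's two data tracks
# (real inputs would be 3D motion sequences and video frames paired
# with text; these placeholder strings are illustrative only).
motion_text = [("motion", f"motion_seq_{i}", f"motion caption {i}") for i in range(4)]
video_text = [("video", f"video_clip_{i}", f"video caption {i}") for i in range(6)]

def joint_training_stream(motion_pairs, video_pairs, seed=0):
    """Merge and shuffle both modalities into one training stream, so
    each epoch mixes precise motion-text pairs with broader video-text
    pairs; the modality tag lets a shared model dispatch each sample
    to the appropriate encoder branch before the language model."""
    rng = random.Random(seed)
    pool = list(motion_pairs) + list(video_pairs)
    rng.shuffle(pool)
    return pool

stream = joint_training_stream(motion_text, video_text)
```

In practice the two tracks would be batched and weighted rather than naively concatenated, but the core design choice is the same: a single training loop sees both modalities, which is what lets the model align motion detail with video breadth.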
Rajan Das Gupta
B.Sc in CSE (AIUB), M.Sc in CS (JU)
Health Informatics · AI in Healthcare · Computer Vision · LLM · NLP
Md Yeasin Rahat
Department of Computer Science, AIUB, Dhaka, Bangladesh
Nafiz Fahad
EliteLab. AI, USA; Faculty of Information Science and Technology, Multimedia University, Malaysia
Health Informatics · AI in Healthcare · AI · Computer Vision · NLP
Abir Ahmed
Department of Computer Science, AIUB, Dhaka, Bangladesh
Liew Tze Hui
Department of Information Technology, Washington University of Science & Technology, Virginia, USA; Centre for Intelligent Cloud Computing, Faculty of Information Science and Technology, Multimedia University, Melaka, Malaysia