🤖 AI Summary
Current multimodal large language models (MLLMs) for human motion understanding follow a unidirectional instruction paradigm, lacking the interactivity and dynamic adaptability needed for multi-perspective analysis. To address this, we propose ChatMotion, the first multimodal multi-agent framework tailored to motion analysis. It establishes a closed-loop architecture of intent-driven reasoning, task decomposition, and modular coordination, introducing an interactive, multi-perspective motion analysis paradigm. We design MotionCore, a dedicated motion representation module that supports on-demand activation and functional decoupling. The framework integrates MLLMs, a collaborative agent architecture, and modular interfaces. Evaluated across diverse motion understanding tasks, it achieves accuracy gains of 12.3%–28.7% over state-of-the-art baselines while substantially improving user engagement and analytical flexibility.
📝 Abstract
Advancements in Multimodal Large Language Models (MLLMs) have improved human motion understanding. However, these models remain constrained by their "instruct-only" nature, lacking interactivity and adaptability for diverse analytical perspectives. To address these challenges, we introduce ChatMotion, a multimodal multi-agent framework for human motion analysis. ChatMotion dynamically interprets user intent, decomposes complex tasks into meta-tasks, and activates specialized function modules for motion comprehension. It integrates multiple specialized modules, such as MotionCore, to analyze human motion from various perspectives. Extensive experiments demonstrate ChatMotion's precision, adaptability, and user engagement in human motion understanding.
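The paper's implementation is not shown here, so the following is a minimal Python sketch of the closed-loop pipeline the summary and abstract describe: interpret user intent, decompose the request into meta-tasks, and activate only the function modules those tasks need. Apart from the MotionCore name itself, every class, function, and keyword rule below is an illustrative assumption, not the authors' actual API.

```python
# Hypothetical sketch of ChatMotion's closed-loop pipeline:
# intent interpretation -> meta-task decomposition -> on-demand module activation.
# Names other than "MotionCore" are invented for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class MetaTask:
    """One atomic sub-task produced by decomposing a user request."""
    name: str      # e.g. "caption", "temporal_grounding"
    payload: dict  # task-specific inputs (query text, video features, ...)


class MotionCore:
    """Registry of decoupled motion-analysis modules, activated on demand."""

    def __init__(self) -> None:
        self._modules: Dict[str, Callable[[dict], str]] = {}

    def register(self, name: str, fn: Callable[[dict], str]) -> None:
        self._modules[name] = fn

    def run(self, task: MetaTask) -> str:
        # Only the module matching this meta-task is activated;
        # unrelated modules stay dormant (functional decoupling).
        if task.name not in self._modules:
            raise KeyError(f"no module registered for '{task.name}'")
        return self._modules[task.name](task.payload)


def interpret_intent(user_query: str) -> List[MetaTask]:
    """Stand-in for the MLLM-driven planner: map a free-form query to an
    ordered list of meta-tasks. A real system would prompt an MLLM here;
    this stub uses keyword rules purely for illustration."""
    tasks: List[MetaTask] = []
    if "describe" in user_query:
        tasks.append(MetaTask("caption", {"query": user_query}))
    if "when" in user_query:
        tasks.append(MetaTask("temporal_grounding", {"query": user_query}))
    return tasks or [MetaTask("caption", {"query": user_query})]


def closed_loop(user_query: str, core: MotionCore) -> str:
    """Plan, execute each meta-task, and aggregate the answers. In a full
    system the aggregated result could feed back into re-planning, which
    is what makes the loop 'closed'."""
    results = [core.run(task) for task in interpret_intent(user_query)]
    return "\n".join(results)


if __name__ == "__main__":
    core = MotionCore()
    core.register("caption", lambda p: f"[caption for: {p['query']}]")
    core.register("temporal_grounding", lambda p: f"[time span for: {p['query']}]")
    print(closed_loop("describe the jump and tell me when it happens", core))
```

The registry-plus-planner split mirrors the decoupling the summary emphasizes: adding a new analytical perspective means registering one more module, without touching the planning loop.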