VideoChat-M1: Collaborative Policy Planning for Video Understanding via Multi-Agent Reinforcement Learning

📅 2025-11-24
🤖 AI Summary
Existing multi-agent video understanding frameworks rely on static, non-learnable tool invocation mechanisms, limiting robust spatiotemporal perception and reasoning for complex videos. To address this, the authors propose VideoChat-M1, a multi-agent system built on a Collaborative Policy Planning (CPP) paradigm, presented as the first learnable multi-agent reinforcement learning framework for video understanding. CPP enables end-to-end optimization of tool invocation through dynamic policy adaptation, multi-stage collaborative feedback, and interactive updates driven by tool-augmented multimodal large language models. Its core innovations include jointly modeling policy generation, execution, and inter-agent communication as a unified learnable process, and introducing a dynamic communication gating mechanism. Evaluated across eight diverse video understanding benchmarks, VideoChat-M1 achieves new state-of-the-art performance: on LongVideoBench, it outperforms Gemini 2.5 Pro by 3.6% and GPT-4o by 15.6%.

📝 Abstract
By leveraging tool-augmented Multimodal Large Language Models (MLLMs), multi-agent frameworks are driving progress in video understanding. However, most of them adopt static and non-learnable tool invocation mechanisms, which limit the discovery of diverse clues essential for robust perception and reasoning regarding temporally or spatially complex videos. To address this challenge, we propose a novel multi-agent system for video understanding, namely VideoChat-M1. Instead of using a single or fixed policy, VideoChat-M1 adopts a distinct Collaborative Policy Planning (CPP) paradigm with multiple policy agents, which comprises three key processes. (1) Policy Generation: Each agent generates its unique tool invocation policy tailored to the user's query; (2) Policy Execution: Each agent sequentially invokes relevant tools to execute its policy and explore the video content; (3) Policy Communication: During the intermediate stages of policy execution, agents interact with one another to update their respective policies. Through this collaborative framework, all agents work in tandem, dynamically refining their preferred policies based on contextual insights from peers to effectively respond to the user's query. Moreover, we equip our CPP paradigm with a concise Multi-Agent Reinforcement Learning (MARL) method. Consequently, the team of policy agents can be jointly optimized to enhance VideoChat-M1's performance, guided by both the final answer reward and intermediate collaborative process feedback. Extensive experiments demonstrate that VideoChat-M1 achieves SOTA performance across eight benchmarks spanning four tasks. Notably, on LongVideoBench, our method outperforms the SOTA model Gemini 2.5 Pro by 3.6% and GPT-4o by 15.6%.
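The three CPP processes in the abstract can be sketched as a simple agent loop. Everything below (class and tool names, the fixed step count, the no-op re-planning) is an illustrative assumption, not the paper's implementation; the actual system uses tool-augmented MLLMs and learned, communication-gated re-planning.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyAgent:
    """One policy agent; all names here are hypothetical, for illustration only."""
    name: str
    policy: list = field(default_factory=list)  # planned tool-call sequence
    clues: list = field(default_factory=list)   # evidence gathered so far

    def generate_policy(self, query):
        # (1) Policy Generation: draft a tool-invocation plan for the query.
        self.policy = [("sample_frames", query), ("caption", query), ("answer", query)]

    def execute_step(self, step, video):
        # (2) Policy Execution: invoke one tool and record the resulting clue.
        tool, arg = self.policy[step]
        self.clues.append(f"{self.name}:{tool}({arg})@{video}")

    def communicate(self, peers):
        # (3) Policy Communication: gather peers' clues mid-execution.
        # A trained agent would revise its remaining plan here; this sketch is a no-op.
        return [c for p in peers for c in p.clues]

def cpp_round(agents, query, video, steps=3):
    """One collaborative round: generate, then interleave execution and communication."""
    for a in agents:
        a.generate_policy(query)
    for step in range(steps):
        for a in agents:
            a.execute_step(step, video)
        for a in agents:
            a.communicate([p for p in agents if p is not a])
    return [a.clues for a in agents]
```

The point of the interleaving is that communication happens during execution, not only after it, so each agent can condition its remaining tool calls on what its peers have already found.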
Problem

Research questions and friction points this paper is trying to address.

Addresses static tool invocation in video understanding systems
Enables dynamic policy refinement through multi-agent collaboration
Optimizes collaborative agents using multi-agent reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative Policy Planning with multiple dynamic agents
Multi-Agent Reinforcement Learning for joint optimization
Interactive policy communication during execution stages
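The abstract says the MARL objective combines a final-answer reward with intermediate collaborative-process feedback. A minimal sketch of such a combined reward is below; the 0/1 answer reward, the averaging, and the weighting `lam` are assumptions for illustration, not values from the paper.

```python
def collaborative_reward(answer_correct, process_scores, lam=0.5):
    """Combine a terminal answer reward with intermediate process feedback.

    answer_correct: whether the team's final answer was right (assumed 0/1 reward).
    process_scores: per-stage collaborative feedback signals in [0, 1] (assumed).
    lam: hypothetical weighting between the two terms.
    """
    r_final = 1.0 if answer_correct else 0.0
    r_process = sum(process_scores) / len(process_scores) if process_scores else 0.0
    return r_final + lam * r_process
```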
👥 Authors
Boyu Chen · The University of Sydney (Neural Architecture Search, Transformer)
Zikang Wang · Institute of Automation, Chinese Academy of Sciences
Zhengrong Yue · Shanghai Jiao Tong University (Unified Multimodal Modeling, Video Understanding, Video Generation)
Kainan Yan · Shenzhen Key Lab of Computer Vision and Pattern Recognition, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Chenyun Yu · Department of Computer Science, City University of Hong Kong (data science and management, query optimization, data mining, information security)
Yi Huang · VIVO AI Lab
Zijun Liu · Tsinghua University (LLM, Agent, Machine Translation, AIGC)
Yafei Wen · VIVO AI Lab
Xiaoxin Chen · Coriell Institute for Medical Research
Yang Liu · Shanghai Artificial Intelligence Laboratory
Peng Li · Institute for AI Industry Research (AIR), Tsinghua University
Yali Wang · Shanghai Artificial Intelligence Laboratory