AI Summary
This work addresses the lack of proactive interaction capability in video multimodal large language models (Video MLLMs) under streaming scenarios. To this end, we propose a multi-round reinforcement learning framework that jointly optimizes response timing and content quality, without requiring precise temporal annotations. Methodologically, we model proactive decision-making by jointly encoding visual frame sequences and dialogue history, eliminating hand-crafted response thresholds. Our two-stage training paradigm comprises supervised fine-tuning (SFT) followed by multi-round RL fine-tuning on a 52K-video dataset. To our knowledge, this is the first work to formulate proactive response generation as a text-to-text sequential decision task, enabling co-optimization of responsiveness and accuracy. On the ProactiveVideoQA benchmark, our approach achieves state-of-the-art performance, significantly outperforming existing proactive Video MLLM baselines.
Abstract
Recent advances in video multimodal large language models (Video MLLMs) have significantly enhanced video understanding and multi-modal interaction capabilities. While most existing systems operate in a turn-based manner, where the model can only reply after user turns, proactively deciding when to reply during video playback presents a promising yet challenging direction for real-time applications. In this work, we propose a novel text-to-text approach to proactive interaction, where the model autonomously determines whether to respond or remain silent at each turn, based on the dialogue history and the visual context up to the current frame of a streaming video. To overcome difficulties of previous methods, such as manually tuning response-decision thresholds and annotating precise reply times, we introduce a multi-turn RL-based training method that encourages timely and accurate responses without requiring precise response-time annotations. We train our model, MMDuet2, on a dataset of 52K videos with two types of dialogues via SFT and RL. Experimental results demonstrate that MMDuet2 outperforms existing proactive Video MLLM baselines in response timing and quality, achieving state-of-the-art performance on the ProactiveVideoQA benchmark.
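The respond-or-remain-silent formulation above can be illustrated with a minimal sketch. This is not the MMDuet2 implementation; the names (`SILENT`, `decide_and_reply`, `stream`, `toy_model`) are hypothetical, and the model here is a stub standing in for an actual Video MLLM. The sketch only shows the streaming control flow: at each incoming frame, the model conditions on all frames so far plus the dialogue history, and emits either a reply or a special silence token.

```python
# Illustrative sketch of a proactive streaming loop (not the MMDuet2 code).
# All names below are hypothetical placeholders.

SILENT = "<silent>"  # assumed special token meaning "do not respond yet"

def decide_and_reply(frames_so_far, dialogue_history, model):
    """Return the model's output for the current step: a reply string,
    or SILENT to keep watching without responding."""
    prompt = {"frames": frames_so_far, "history": dialogue_history}
    return model(prompt)

def stream(frames, user_turns, model):
    """Run the streaming loop and collect (frame_index, reply) pairs."""
    history, replies = list(user_turns), []
    for i, _ in enumerate(frames):
        out = decide_and_reply(frames[: i + 1], history, model)
        if out != SILENT:            # model chose to respond proactively
            replies.append((i, out))
            history.append(out)      # replies become part of the dialogue
    return replies

# Toy stand-in model: stays silent until the third frame, then answers once.
def toy_model(prompt):
    if len(prompt["frames"]) == 3 and "A goal is scored." not in prompt["history"]:
        return "A goal is scored."
    return SILENT

print(stream(["f0", "f1", "f2", "f3"],
             ["Tell me when a goal happens."], toy_model))
```

In an RL setting like the one described, the reward would score both when a reply is emitted (timing) and what it says (content), so that speaking and staying silent are optimized jointly rather than gated by a hand-tuned threshold.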