🤖 AI Summary
To address the challenges of weak video reasoning capabilities, the scarcity of high-quality reasoning data, and the absence of effective training paradigms, this work introduces two novel datasets: DarkEventInfer (inferring masked event segments from contextual video cues) and MixVidQA (question answering over interleaved sequences of two distinct clips, requiring the model to isolate one clip while disregarding the other). Training on these carefully curated samples with reinforcement learning guided by diverse reward functions yields VersaVid-R1, the first versatile video understanding and reasoning model under the Reason-Then-Respond paradigm, capable of handling multiple-choice and open-ended question answering as well as video captioning. VersaVid-R1 significantly outperforms existing models across three major evaluation categories—general video understanding, cognitive reasoning, and captioning.
📝 Abstract
Recent advancements in multimodal large language models have successfully extended the Reason-Then-Respond paradigm to image-based reasoning, yet video-based reasoning remains an underdeveloped frontier, primarily due to the scarcity of high-quality reasoning-oriented data and effective training methodologies. To bridge this gap, we introduce DarkEventInfer and MixVidQA, two novel datasets specifically designed to stimulate models' advanced video understanding and reasoning abilities. DarkEventInfer presents videos with masked event segments, requiring models to infer the obscured content based on contextual video cues. MixVidQA, on the other hand, presents interleaved video sequences composed of two distinct clips, challenging models to isolate and reason about one while disregarding the other. Leveraging these carefully curated training samples together with reinforcement learning guided by diverse reward functions, we develop VersaVid-R1, the first versatile video understanding and reasoning model under the Reason-Then-Respond paradigm capable of handling multiple-choice and open-ended question answering, as well as video captioning tasks. Extensive experiments demonstrate that VersaVid-R1 significantly outperforms existing models across a broad spectrum of benchmarks, covering video general understanding, cognitive reasoning, and captioning tasks.
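The two data-construction ideas described above — masking an event segment (DarkEventInfer) and interleaving two clips (MixVidQA) — can be sketched in a few lines. This is a hypothetical illustration only: the paper does not publish its pipeline, and real implementations would operate on video tensors rather than frame lists; the function names, frame encoding, and mask token are all assumptions.

```python
# Hypothetical sketch of the two dataset-construction ideas.
# Frames are modeled as a simple list; real pipelines use decoded video tensors.

def mask_event_segment(frames, start, end, mask_token="<MASKED>"):
    """DarkEventInfer-style masking: hide a contiguous event segment so a
    model must infer the obscured content from the surrounding context."""
    return frames[:start] + [mask_token] * (end - start) + frames[end:]

def interleave_clips(clip_a, clip_b):
    """MixVidQA-style interleaving: alternate frames from two distinct clips
    so a model must isolate and reason about one while ignoring the other."""
    mixed = []
    for a, b in zip(clip_a, clip_b):
        mixed.extend([a, b])
    return mixed

frames = [f"f{i}" for i in range(6)]
print(mask_event_segment(frames, 2, 4))
# ['f0', 'f1', '<MASKED>', '<MASKED>', 'f4', 'f5']
print(interleave_clips(["a0", "a1"], ["b0", "b1"]))
# ['a0', 'b0', 'a1', 'b1']
```

In both cases the question–answer pair is written against only one underlying clip, so a correct response requires the reasoning behavior the dataset is designed to elicit.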