AI Summary
This work addresses the limitations of multimodal large language models (MLLMs) in video understanding, whose reasoning often suffers from thinking drift and weak temporal modeling. Existing reinforcement learning approaches compound the problem by relying on supervised fine-tuning and fixed reasoning paths, leading to poor generalization. To overcome these challenges, we propose the Summary-Driven Reinforcement Learning (SDRL) framework, which optimizes the reasoning process in a single training stage, without supervised fine-tuning, through a structured chain-of-thought paradigm: "Summarize → Think → Answer." SDRL introduces two self-supervised mechanisms: Consistency of Vision Knowledge (CVK), which enforces factual grounding, and Dynamic Variety of Reasoning (DVR), which adaptively modulates exploration intensity based on group accuracy. Combined with a GRPO objective and KL divergence constraints, this approach enables end-to-end optimization of structured reasoning, achieving state-of-the-art performance across seven VideoQA benchmarks and significantly enhancing both reasoning accuracy and robustness.
Abstract
Multi-modal Large Language Models (MLLMs) show promise in video understanding. However, their reasoning often suffers from thinking drift and weak temporal comprehension, even when enhanced by Reinforcement Learning (RL) techniques like Group Relative Policy Optimization (GRPO). Moreover, existing RL methods usually depend on Supervised Fine-Tuning (SFT), which requires costly Chain-of-Thought (CoT) annotation and multi-stage training, and enforces fixed reasoning paths, limiting MLLMs' ability to generalize and potentially inducing bias. To overcome these limitations, we introduce Summary-Driven Reinforcement Learning (SDRL), a novel single-stage RL framework that obviates the need for SFT by utilizing a Structured CoT format: Summarize -> Think -> Answer. SDRL introduces two self-supervised mechanisms integrated into the GRPO objective: 1) Consistency of Vision Knowledge (CVK) enforces factual grounding by reducing KL divergence among generated summaries; and 2) Dynamic Variety of Reasoning (DVR) promotes exploration by dynamically modulating thinking diversity based on group accuracy. This novel integration effectively balances alignment and exploration, supervising both the final answer and the reasoning process. Our method achieves state-of-the-art performance on seven public VideoQA datasets.
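The interplay the abstract describes, group-relative advantages (GRPO), a KL-based consistency penalty over summaries (CVK), and an accuracy-modulated diversity term (DVR), can be sketched roughly as below. All function names, the weights `alpha` and `beta`, and the exact functional forms are illustrative assumptions for intuition only, not the paper's actual formulation.

```python
import numpy as np

def grpo_advantages(rewards):
    # GRPO-style group-relative advantage: normalize each sampled response's
    # reward by the group's mean and standard deviation.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def kl(p, q):
    # KL divergence between two discrete distributions (with smoothing).
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

def cvk_penalty(summary_dists):
    # CVK (assumed form): mean pairwise KL among the group's summary
    # distributions; lower means the summaries agree on the visual facts.
    n = len(summary_dists)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i != j:
                total += kl(summary_dists[i], summary_dists[j])
                pairs += 1
    return total / max(pairs, 1)

def dvr_weight(group_accuracy, base=1.0):
    # DVR (assumed form): when group accuracy is low, boost exploration;
    # when the group is mostly correct, damp diversity to stay aligned.
    return base * (1.0 - group_accuracy)

def sdrl_reward(answer_correct, summary_dists, group_accuracy,
                diversity_score, alpha=0.5, beta=0.5):
    # Composite reward (hypothetical): answer correctness, minus the CVK
    # inconsistency penalty, plus the DVR-weighted diversity bonus.
    return (float(answer_correct)
            - alpha * cvk_penalty(summary_dists)
            + beta * dvr_weight(group_accuracy) * diversity_score)
```

For example, a group where two of four responses are correct yields advantages summing to zero, while identical summaries incur zero CVK penalty, so only correctness and the diversity bonus move the reward.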