Reinforcing Structured Chain-of-Thought for Video Understanding

πŸ“… 2026-03-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the limitations of multimodal large language models (MLLMs) in video understanding, which often suffer from reasoning drift and weak temporal comprehension; existing reinforcement learning approaches compound these issues by relying on supervised fine-tuning and fixed reasoning paths, leading to poor generalization. To overcome these challenges, the authors propose the Summary-Driven Reinforcement Learning (SDRL) framework, which optimizes the reasoning process in a single training stage, without supervised fine-tuning, through a structured chain-of-thought paradigm: Summarize → Think → Answer. SDRL introduces two self-supervised mechanisms: Consistency of Vision Knowledge (CVK), which enforces factual grounding by reducing KL divergence among generated summaries, and Dynamic Variety of Reasoning (DVR), which adaptively modulates exploration intensity based on group accuracy. Integrated into a GRPO objective, this approach enables end-to-end optimization of structured reasoning, supervising both the final answer and the reasoning process, and achieves state-of-the-art performance across seven VideoQA benchmarks while enhancing both reasoning accuracy and robustness.
πŸ“ Abstract
Multi-modal Large Language Models (MLLMs) show promise in video understanding. However, their reasoning often suffers from thinking drift and weak temporal comprehension, even when enhanced by Reinforcement Learning (RL) techniques like Group Relative Policy Optimization (GRPO). Moreover, existing RL methods usually depend on Supervised Fine-Tuning (SFT), which requires costly Chain-of-Thought (CoT) annotation and multi-stage training, and enforces fixed reasoning paths, limiting MLLMs' ability to generalize and potentially inducing bias. To overcome these limitations, we introduce Summary-Driven Reinforcement Learning (SDRL), a novel single-stage RL framework that obviates the need for SFT by utilizing a Structured CoT format: Summarize → Think → Answer. SDRL introduces two self-supervised mechanisms integrated into the GRPO objective: 1) Consistency of Vision Knowledge (CVK) enforces factual grounding by reducing KL divergence among generated summaries; and 2) Dynamic Variety of Reasoning (DVR) promotes exploration by dynamically modulating thinking diversity based on group accuracy. This novel integration effectively balances alignment and exploration, supervising both the final answer and the reasoning process. Our method achieves state-of-the-art performance on seven public VideoQA datasets.
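The abstract describes three computable pieces: GRPO's group-relative advantages, a CVK term that penalizes KL divergence among generated summaries, and a DVR term that scales exploration by group accuracy. The sketch below is a hypothetical, simplified reading of those signals (the function names, the pairwise-KL form of CVK, and the linear DVR schedule are assumptions, not the paper's implementation):

```python
import numpy as np

def grpo_advantages(rewards):
    # GRPO: normalize each rollout's reward against its sampled group,
    # so advantages are relative to the group mean.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def cvk_penalty(summary_dists):
    # CVK (hypothetical form): mean pairwise KL divergence among the
    # token distributions of the generated summaries. Lower values mean
    # the summaries agree, i.e. stay factually grounded in the video.
    P = np.asarray(summary_dists, dtype=float)
    total, pairs = 0.0, 0
    for i in range(len(P)):
        for j in range(len(P)):
            if i != j:
                total += float(np.sum(P[i] * np.log(P[i] / P[j])))
                pairs += 1
    return total / pairs

def dvr_weight(group_accuracy):
    # DVR (hypothetical form): encourage more diverse thinking when the
    # group is mostly wrong, and relax exploration as accuracy improves.
    return 1.0 - group_accuracy
```

In a full training loop these terms would be folded into the GRPO objective, e.g. shaping each rollout's reward with the CVK penalty and scaling an entropy or diversity bonus by the DVR weight.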
Problem

Research questions and friction points this paper is trying to address.

Video Understanding
Chain-of-Thought
Reinforcement Learning
Multi-modal Large Language Models
Temporal Comprehension
Innovation

Methods, ideas, or system contributions that make the work stand out.

Summary-Driven Reinforcement Learning
Structured Chain-of-Thought
Consistency of Vision Knowledge
Dynamic Variety of Reasoning
Multi-modal Large Language Models
πŸ”Ž Similar Papers
No similar papers found.