VideoCap-R1: Enhancing MLLMs for Video Captioning via Structured Thinking

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Inaccurate action understanding remains a key challenge in video captioning. Method: This paper proposes a GRPO-based post-training framework that guides a multimodal large language model (Qwen2VL-7B) to first perform structured reasoning—decomposing scenes into subject, attribute, and action—before generating full captions. Contribution/Results: To our knowledge, this is the first application of GRPO to video multimodal captioning. We design a dual-reward mechanism comprising an LLM-free structured reasoning scorer and an LLM-assisted caption quality scorer, explicitly modeling the synergy between reasoning and generation. Evaluated on DREAM1K, VDC, and CAREBENCH, our method achieves substantial gains using only 1.5K training samples: +4.4 event F1 on DREAM1K, +4.2 accuracy on VDC, and +3.1 action F1 / +6.9 object F1 on CAREBENCH over the Qwen2VL-7B baseline—demonstrating the effectiveness of structured reasoning guidance for multimodal video understanding.

📝 Abstract
While recent advances in reinforcement learning have significantly enhanced reasoning capabilities in large language models (LLMs), these techniques remain underexplored in multimodal LLMs for video captioning. This paper presents the first systematic investigation of GRPO-based RL post-training for video MLLMs, with the goal of enhancing video MLLMs' capability of describing actions in videos. Specifically, we develop VideoCap-R1, which is prompted to first perform structured thinking that analyzes video subjects with their attributes and actions before generating complete captions, supported by two specialized reward mechanisms: an LLM-free think scorer evaluating structured thinking quality and an LLM-assisted caption scorer assessing output quality. The RL training framework effectively establishes the connection between structured reasoning and comprehensive description generation, enabling the model to produce captions with more accurate actions. Our experiments demonstrate that VideoCap-R1 achieves substantial improvements over the Qwen2VL-7B baseline using limited samples (1.5k) across multiple video caption benchmarks (DREAM1K: +4.4 event F1, VDC: +4.2 Acc, CAREBENCH: +3.1 action F1, +6.9 object F1) while consistently outperforming the SFT-trained counterparts, confirming GRPO's superiority in enhancing MLLMs' captioning capabilities.
Problem

Research questions and friction points this paper is trying to address.

Enhancing video MLLMs' action description capability
Investigating GRPO-based RL post-training for video MLLMs
Connecting structured reasoning with comprehensive caption generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

GRPO-based RL post-training for video MLLMs
Structured thinking with subject-attribute-action analysis
Dual reward mechanisms for quality assessment
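The dual-reward GRPO setup described above can be illustrated with a minimal sketch. The weighting of the two rewards (`w_think`, `w_caption`) and the function names are assumptions for illustration, not details from the paper; GRPO's core step — standardizing rewards within a group of sampled responses to obtain advantages without a learned value model — is shown as commonly formulated:

```python
import statistics

def grpo_advantages(think_scores, caption_scores, w_think=0.5, w_caption=0.5):
    """Blend the two reward signals and compute group-relative advantages.

    Each element of the input lists corresponds to one sampled response
    for the same video: `think_scores` from the LLM-free structured-thinking
    scorer, `caption_scores` from the LLM-assisted caption-quality scorer.
    The weights are illustrative placeholders, not from the paper.
    """
    rewards = [w_think * t + w_caption * c
               for t, c in zip(think_scores, caption_scores)]
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    # GRPO advantage: reward standardized within the sampled group,
    # replacing the critic/value baseline used by PPO.
    return [(r - mean) / std for r in rewards]

# Group of 4 sampled captions for one training video (scores are made up)
adv = grpo_advantages([0.8, 0.2, 0.6, 0.4], [0.9, 0.3, 0.5, 0.7])
```

Responses whose combined reward beats the group mean receive positive advantage and are reinforced; the rest are suppressed, which is how the framework ties structured-thinking quality to the final caption gradient.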
Desen Meng
Nanjing University
Computer Vision · Multimodal Large Language Models
Rui Huang
Nanjing University
Zhilin Dai
Nanjing University
Xinhao Li
Nanjing University
Video Understanding · Multimodal LLM · Vision-Language Learning
Yifan Xu
Nanjing University
Jun Zhang
Nanjing University
Zhenpeng Huang
Nanjing University
Meng Zhang
Honor Device Co., Ltd
Lingshu Zhang
Honor Device Co., Ltd
Yi Liu
Honor Device Co., Ltd
Limin Wang
Nanjing University, Shanghai AI Laboratory