Reinforcing Video Reasoning Segmentation to Think Before It Segments

πŸ“… 2025-08-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Video Reasoning Segmentation (VRS) faces challenges including inadequate implicit instruction understanding, weak spatiotemporal reasoning, and poor model interpretability. To address these, we propose Veason-R1, the first VRS model to introduce a "Chain-of-Thought initialization + Group Relative Policy Optimization (GRPO)" training paradigm. It establishes a holistic reward mechanism integrating spatial alignment and temporal consistency, enabling structured spatiotemporal semantic reasoning prior to segmentation. Built upon large vision-language models, our method leverages Chain-of-Thought data construction, supervised fine-tuning (SFT), and GRPO-based reinforcement learning. Veason-R1 achieves J&F gains of +1.3 on ReVOS and +10.0 on ReasonVOS, and improves the hallucination-robustness score R by +8.8. It significantly enhances key-frame localization, fine-grained visual grounding, and cross-scene generalization.

πŸ“ Abstract
Video reasoning segmentation (VRS) endeavors to delineate referred objects in videos guided by implicit instructions that encapsulate human intent and temporal logic. Previous approaches leverage large vision-language models (LVLMs) to encode object semantics into <SEG> tokens for mask prediction. However, this paradigm suffers from limited interpretability during inference and suboptimal performance due to inadequate spatiotemporal reasoning. Drawing inspiration from seminal breakthroughs in reinforcement learning, we introduce Veason-R1, a specialized LVLM for VRS that emphasizes structured reasoning in segmentation. Veason-R1 is trained through Group Relative Policy Optimization (GRPO) augmented with Chain-of-Thought (CoT) initialization. To begin with, we curate high-quality CoT training data to instill structured reasoning trajectories, bridging video-level semantics and frame-level spatial grounding, yielding the supervised fine-tuned model Veason-SFT. Subsequently, GRPO fine-tuning encourages efficient exploration of the reasoning space by optimizing reasoning chains. To this end, we incorporate a holistic reward mechanism that synergistically enhances spatial alignment and temporal consistency, bolstering keyframe localization and fine-grained grounding. Comprehensive empirical evaluations demonstrate that Veason-R1 achieves state-of-the-art performance on multiple benchmarks, surpassing prior art by significant margins (e.g., +1.3 J&F on ReVOS and +10.0 J&F on ReasonVOS), while exhibiting robustness to hallucinations (+8.8 R). Our code and model weights will be available at Veason-R1.
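The J&F metric reported above is the standard video-segmentation score averaging region similarity J (mask IoU) and boundary accuracy F. As a point of reference, here is a minimal sketch of the J component for binary per-frame masks; this illustrates the metric definition, not the paper's evaluation code:

```python
import numpy as np

def region_similarity_j(pred_mask, gt_mask):
    """Region similarity J: intersection-over-union of binary masks,
    as used in video object segmentation benchmarks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

# Example: two 4x4 masks whose foreground squares overlap in one pixel.
pred = np.zeros((4, 4), dtype=np.uint8)
pred[0:2, 0:2] = 1  # predicted object occupies the top-left 2x2 block
gt = np.zeros((4, 4), dtype=np.uint8)
gt[1:3, 1:3] = 1    # ground truth occupies the central 2x2 block
j = region_similarity_j(pred, gt)  # intersection 1, union 7 -> J = 1/7
```

A per-sequence J score is then the mean of this quantity over all annotated frames.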
Problem

Research questions and friction points this paper is trying to address.

Improving interpretability in video reasoning segmentation
Enhancing spatiotemporal reasoning for better segmentation
Optimizing reasoning chains to boost performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Group Relative Policy Optimization (GRPO)
Incorporates Chain-of-Thought (CoT) initialization
Enhances spatial alignment and temporal consistency
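The core of the contributions above is GRPO's group-relative advantage computed from a combined spatial + temporal reward. The following is a minimal sketch under stated assumptions: the reward weights, reward definitions, and group size are illustrative, not the authors' actual implementation.

```python
from statistics import mean, pstdev

def holistic_reward(spatial_iou, temporal_consistency,
                    w_spatial=0.5, w_temporal=0.5):
    """Illustrative holistic reward: a weighted sum of a spatial
    alignment term (e.g., mask IoU in [0, 1]) and a temporal
    consistency term (e.g., frame-to-frame mask agreement in [0, 1])."""
    return w_spatial * spatial_iou + w_temporal * temporal_consistency

def grpo_advantages(rewards, eps=1e-8):
    """GRPO normalizes each sampled rollout's reward against its group:
    A_i = (r_i - mean(group)) / std(group). Rollouts scoring above the
    group mean get positive advantage and are reinforced."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: a group of 4 sampled reasoning chains for one video query,
# each scored on (spatial, temporal) quality.
group_scores = [(0.9, 0.8), (0.6, 0.7), (0.4, 0.5), (0.8, 0.9)]
rewards = [holistic_reward(s, t) for s, t in group_scores]
advantages = grpo_advantages(rewards)
```

Because the advantage is normalized within each sampled group rather than against a learned value baseline, GRPO needs no separate critic network, which keeps reinforcement fine-tuning of a large model comparatively cheap.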
πŸ”Ž Similar Papers
No similar papers found.
Sitong Gong
IIAU Lab, Dalian University of Technology
Lu Zhang
IIAU Lab, Dalian University of Technology
Yunzhi Zhuge
Dalian University of Technology
Computer Vision
Xu Jia
Associate Professor at Dalian University of Technology
Computer Vision · Machine Learning · Bio-Inspired Vision
Pingping Zhang
IIAU Lab, Dalian University of Technology
Huchuan Lu
IIAU Lab, Dalian University of Technology