AI Summary
Video Reasoning Segmentation (VRS) faces challenges including inadequate implicit instruction understanding, weak spatiotemporal reasoning, and poor model interpretability. To address these, we propose Veason-R1, the first VRS model to introduce a "Chain-of-Thought initialization + Group Relative Policy Optimization (GRPO)" training paradigm. It establishes a holistic reward mechanism integrating spatial alignment and temporal consistency, enabling structured spatiotemporal semantic reasoning prior to segmentation. Built upon large vision-language models, our method leverages Chain-of-Thought data construction, supervised fine-tuning (SFT), and GRPO-based reinforcement learning. On the ReVOS and ReasonVOS benchmarks, Veason-R1 achieves +1.3 and +10.0 improvements in J&F scores, respectively, and boosts the hallucination suppression rate (R) by +8.8. It significantly enhances key-frame localization, fine-grained visual grounding, and cross-scene generalization.
Abstract
Video reasoning segmentation (VRS) endeavors to delineate referred objects in videos guided by implicit instructions that encapsulate human intent and temporal logic. Previous approaches leverage large vision-language models (LVLMs) to encode object semantics into <SEG> tokens for mask prediction. However, this paradigm suffers from limited interpretability during inference and suboptimal performance due to inadequate spatiotemporal reasoning. Drawing inspiration from seminal breakthroughs in reinforcement learning, we introduce Veason-R1, a specialized LVLM for VRS that emphasizes structured reasoning in segmentation. Veason-R1 is trained through Group Relative Policy Optimization (GRPO) augmented with Chain-of-Thought (CoT) initialization. To begin with, we curate high-quality CoT training data to instill structured reasoning trajectories, bridging video-level semantics and frame-level spatial grounding, yielding the supervised fine-tuned model Veason-SFT. Subsequently, GRPO fine-tuning encourages efficient exploration of the reasoning space by optimizing reasoning chains. To this end, we incorporate a holistic reward mechanism that synergistically enhances spatial alignment and temporal consistency, bolstering keyframe localization and fine-grained grounding. Comprehensive empirical evaluations demonstrate that Veason-R1 achieves state-of-the-art performance on multiple benchmarks, surpassing prior art by significant margins (e.g., +1.3 J&F on ReVOS and +10.0 J&F on ReasonVOS), while exhibiting robustness to hallucinations (+8.8 R). Our code and model weights will be available at Veason-R1.
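The GRPO stage with a holistic reward can be sketched in miniature as follows. This is an illustrative assumption, not the paper's exact formulation: the per-frame spatial term, the keyframe-overlap temporal term, the equal weighting of the two, and the set-based mask/frame representations are all hypothetical simplifications; GRPO's group-relative advantage normalization is the standard definition.

```python
import statistics

def spatial_reward(pred_mask, gt_mask):
    # Hypothetical spatial-alignment term: IoU between predicted and
    # ground-truth masks, here represented as sets of pixel indices.
    union = len(pred_mask | gt_mask)
    return len(pred_mask & gt_mask) / union if union else 1.0

def temporal_reward(pred_frames, gt_frames):
    # Hypothetical temporal-consistency term: overlap between the frames
    # the model segments and the annotated keyframes.
    union = len(pred_frames | gt_frames)
    return len(pred_frames & gt_frames) / union if union else 1.0

def holistic_reward(pred_mask, gt_mask, pred_frames, gt_frames):
    # Assumed equal weighting of the spatial and temporal terms.
    return 0.5 * spatial_reward(pred_mask, gt_mask) + \
           0.5 * temporal_reward(pred_frames, gt_frames)

def grpo_advantages(rewards):
    # GRPO: each sampled completion's reward is normalized against the
    # mean and standard deviation of its sampling group.
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    return [(r - mu) / (sigma + 1e-8) for r in rewards]
```

For example, two sampled reasoning chains whose holistic rewards are 1.0 and 0.0 receive group-relative advantages of roughly +1 and -1, so the policy gradient favors the chain that better aligned with the masks and keyframes.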