AI Summary
Current video foundation models face a bottleneck in fine-grained spatiotemporal understanding, particularly in unifying video referring understanding (capturing the semantics of video regions) and video grounding (language-guided spatiotemporal localization). Existing approaches typically treat these tasks in isolation and suffer from a lack of high-quality unified instruction data and comprehensive evaluation benchmarks. To address this, we propose SAMA, a framework that jointly learns referring understanding, spatiotemporal grounding, and multi-turn dialogue through a synergistic learning paradigm. We introduce SAMA-239K, a large-scale unified video instruction dataset, and SAMA-Bench, a comprehensive evaluation benchmark covering diverse spatiotemporal reasoning capabilities. Furthermore, we design an end-to-end model that integrates a spatiotemporal context aggregator with the Segment Anything Model (SAM) for joint semantic understanding and precise spatiotemporal segmentation. SAMA achieves strong performance on SAMA-Bench, sets a new state-of-the-art on general grounding benchmarks (e.g., Ref-YouTube-VOS), and retains competitive general visual understanding capabilities.
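To make the architecture concrete, the sketch below shows one plausible wiring of the pieces named above: a spatio-temporal aggregator refines per-frame visual features, and the hidden state of a dedicated segmentation token from the LMM prompts a SAM-style mask decoder on each frame. This is an illustrative assumption only; the module names, the stub decoder, and the segmentation-token mechanism are not taken from the paper's released code.

```python
# Hypothetical sketch (not the paper's implementation): coupling an LMM grounding
# token with a SAM-style mask decoder through a spatio-temporal context aggregator.
# All module and variable names are illustrative assumptions.
import torch
import torch.nn as nn


class SpatioTemporalAggregator(nn.Module):
    """Refines visual tokens with self-attention across all frames."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T*N, C) visual tokens flattened over frames
        ctx, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + ctx)


class MaskDecoderStub(nn.Module):
    """Stand-in for SAM's promptable mask decoder (dot-product mask head)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.prompt_proj = nn.Linear(dim, dim)

    def forward(self, feat_map: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, C, H, W) per-frame features; prompt: (B, C) grounding embedding
        q = self.prompt_proj(prompt)
        return torch.einsum("bchw,bc->bhw", feat_map, q)  # per-pixel mask logits


def ground_video(frame_feats, lmm_hidden, seg_token_idx, aggregator, decoder):
    """frame_feats: (B, T, C, H, W); lmm_hidden: (B, L, C) LMM output states."""
    B, T, C, H, W = frame_feats.shape
    tokens = frame_feats.flatten(3).permute(0, 1, 3, 2).reshape(B, T * H * W, C)
    ctx = aggregator(tokens).reshape(B, T, H * W, C)
    prompt = lmm_hidden[:, seg_token_idx]  # embedding of a [SEG]-style token
    masks = []
    for t in range(T):
        feat_t = ctx[:, t].permute(0, 2, 1).reshape(B, C, H, W)
        masks.append(decoder(feat_t, prompt))
    return torch.stack(masks, dim=1)  # (B, T, H, W) mask logits per frame
```

The key design point this sketch tries to capture is that language-side semantics and pixel-level grounding meet through a single prompt embedding, so referring understanding and segmentation can be trained jointly rather than in isolation.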
Abstract
Achieving fine-grained spatio-temporal understanding in videos remains a major challenge for current Video Large Multimodal Models (Video LMMs). Addressing this challenge requires mastering two core capabilities: video referring understanding, which captures the semantics of video regions, and video grounding, which segments object regions based on natural language descriptions. However, most existing approaches tackle these tasks in isolation, limiting progress toward unified, referentially grounded video interaction. We identify a key bottleneck in the lack of high-quality, unified video instruction data and a comprehensive benchmark for evaluating referentially grounded video chat. To address these challenges, we contribute in three core aspects: dataset, model, and benchmark. First, we introduce SAMA-239K, a large-scale dataset comprising 15K videos specifically curated to enable joint learning of video referring understanding, grounding, and multi-turn video chat. Second, we propose the SAMA model, which incorporates a versatile spatio-temporal context aggregator and a Segment Anything Model to jointly enhance fine-grained video comprehension and precise grounding capabilities. Finally, we establish SAMA-Bench, a meticulously designed benchmark consisting of 5,067 questions from 522 videos, to comprehensively evaluate the integrated capabilities of Video LMMs in multi-turn, spatio-temporal referring understanding and grounded dialogue. Extensive experiments and benchmarking results show that SAMA not only achieves strong performance on SAMA-Bench but also sets a new state-of-the-art on general grounding benchmarks, while maintaining highly competitive performance on standard visual understanding benchmarks.
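Referring video segmentation benchmarks of the kind mentioned above are conventionally scored with region similarity (the J measure, i.e., mask IoU averaged over frames), usually reported alongside contour accuracy (F). The snippet below is a minimal sketch of the J measure only, assuming binary per-frame NumPy masks; it is not tied to any official evaluation toolkit.

```python
# Minimal sketch of region similarity (J): per-frame mask IoU averaged over a video.
# Assumes binary (H, W) NumPy masks; illustrative only, not an official scorer.
import numpy as np


def frame_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary masks of shape (H, W)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else float(inter) / float(union)


def video_j_measure(pred_masks, gt_masks) -> float:
    """Mean per-frame IoU over a video; inputs are equal-length lists of masks."""
    assert len(pred_masks) == len(gt_masks)
    return float(np.mean([frame_iou(p, g) for p, g in zip(pred_masks, gt_masks)]))
```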