🤖 AI Summary
Existing video captioning methods struggle to balance high-level abstraction with fine-grained, object-level accuracy, and they lack spatiotemporal consistency and interactive flexibility. To address this, we propose the first training-free spatiotemporal multimodal prompting framework, enabling users to interactively select targets via points, bounding boxes, or arbitrary regions and track their attributes, actions, states, interactions, and environmental context across frames. Our method integrates SAMURAI for segmentation, TRACE-Uni for temporal modeling, and InternVL-2.5 for multimodal understanding, augmented with event-boundary awareness and chain-of-reasoning mechanisms. Evaluated on multiple benchmarks, it achieves zero-shot state-of-the-art performance in object-level video captioning, significantly improving spatial precision and temporal coherence. Moreover, it supports flexible user interaction and cross-temporal state modeling, enabling precise, controllable, and temporally grounded video understanding without any fine-tuning.
📝 Abstract
We present CAT-V (Caption AnyThing in Video), a training-free framework for fine-grained, object-centric video captioning that produces detailed descriptions of user-selected objects through time. CAT-V integrates three key components: a Segmenter based on SAMURAI for precise object segmentation across frames, a Temporal Analyzer powered by TRACE-Uni for accurate event boundary detection and temporal analysis, and a Captioner using InternVL-2.5 for generating detailed object-centric descriptions. Through spatiotemporal visual prompts and chain-of-thought reasoning, our framework generates detailed, temporally aware descriptions of objects' attributes, actions, states, interactions, and environmental contexts without requiring additional training data. CAT-V supports flexible user interaction through various visual prompts (points, bounding boxes, and irregular regions) and maintains temporal sensitivity by tracking object states and interactions across different time segments. Our approach addresses the limitations of existing video captioning methods, which either produce overly abstract descriptions or lack object-level precision, enabling fine-grained, object-specific descriptions while maintaining temporal coherence and spatial accuracy. The GitHub repository for this project is available at https://github.com/yunlong10/CAT-V.
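The three-stage flow described above (segment the prompted object, split the video at event boundaries, caption each segment) can be sketched as follows. This is a minimal illustrative skeleton only: the function names and stub behaviors are hypothetical stand-ins, not the real SAMURAI, TRACE-Uni, or InternVL-2.5 APIs.

```python
from dataclasses import dataclass

@dataclass
class VisualPrompt:
    """A user-supplied spatial prompt: a point, a bounding box, or a region."""
    kind: str    # "point", "box", or "region"
    data: tuple  # e.g. (x, y) for a point, (x1, y1, x2, y2) for a box

def segment(frames, prompt):
    # Segmenter role (SAMURAI in CAT-V): propagate the user's prompt into
    # per-frame object masks. Stubbed here as one mask label per frame.
    return [f"mask@{i}" for i in range(len(frames))]

def detect_events(frames):
    # Temporal Analyzer role (TRACE-Uni in CAT-V): partition the video at
    # detected event boundaries. Stubbed here as two equal halves.
    mid = len(frames) // 2
    return [(0, mid), (mid, len(frames))]

def caption(masks, span):
    # Captioner role (InternVL-2.5 in CAT-V): describe the masked object
    # within one event segment. Stubbed here as a template string.
    start, end = span
    return f"object visible in frames {start}-{end}"

def cat_v(frames, prompt):
    # Pipeline: masks and event spans feed the captioner, yielding one
    # temporally grounded description per event segment.
    masks = segment(frames, prompt)
    events = detect_events(frames)
    return [caption(masks, span) for span in events]

captions = cat_v(frames=list(range(8)), prompt=VisualPrompt("point", (12, 34)))
print(captions)
```

In the real system each stub is a learned model, but the composition is the same: because the stages only exchange masks, event spans, and text, the pipeline needs no joint fine-tuning, which is what makes the framework training-free.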