MaskCaptioner: Learning to Jointly Segment and Caption Object Trajectories in Videos

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of jointly modeling detection, tracking, and natural-language description in Dense Video Object Captioning (DVOC), as well as the high cost of manual annotation. The authors propose the first end-to-end spatio-temporal, instance-aware framework for the task. Methodologically, they unify instance segmentation and trajectory caption generation; introduce synthetically generated, high-quality datasets (LVISCap and LV-VISCap) to enable cross-modal joint training and circumvent the limitations of conventional disjoint, pipeline-based approaches; and integrate a state-of-the-art vision-language model, online tracking, and a sequence-generation module for full end-to-end optimization. The method achieves state-of-the-art performance on three major benchmarks (VidSTG, VLN, and BenSMOT), with significant improvements in caption accuracy and semantic richness. These results validate the effectiveness of trajectory-level, fine-grained spatio-temporal understanding for DVOC.

📝 Abstract
Dense Video Object Captioning (DVOC) is the task of jointly detecting, tracking, and captioning object trajectories in a video, requiring the ability to understand spatio-temporal details and describe them in natural language. Due to the complexity of the task and the high cost associated with manual annotation, previous approaches resort to disjoint training strategies, potentially leading to suboptimal performance. To circumvent this issue, we propose to generate captions about spatio-temporally localized entities leveraging a state-of-the-art VLM. By extending the LVIS and LV-VIS datasets with our synthetic captions (LVISCap and LV-VISCap), we train MaskCaptioner, an end-to-end model capable of jointly detecting, segmenting, tracking and captioning object trajectories. Moreover, with pretraining on LVISCap and LV-VISCap, MaskCaptioner achieves state-of-the-art DVOC results on three existing benchmarks, VidSTG, VLN and BenSMOT. The datasets and code are available at https://www.gabriel.fiastre.fr/maskcaptioner/.
Problem

Research questions and friction points this paper is trying to address.

Jointly segment and caption object trajectories in videos
Overcome disjoint training strategies in dense video captioning
Generate synthetic captions to train end-to-end DVOC models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly segments and captions object trajectories in videos
Uses synthetic captions from extended LVIS datasets for training
End-to-end model for detection, tracking, and captioning