UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks

📅 2025-07-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video captioning benchmarks and models over-rely on the visual modality, neglecting the role of audio in capturing scene dynamics, speaker intent, and narrative context; they also lack fine-grained multimodal description data and efficient lightweight models tailored to real-world user-generated content (UGC) short videos. Method: We introduce UGC-VideoCap, the first full-modality, fine-grained video captioning benchmark designed specifically for UGC short videos, built through a three-stage human annotation pipeline (audio-only, visual-only, and joint audio-visual semantic alignment) and containing 4,000 multimodal question-answer pairs. We further propose UGC-VideoCaptioner, an efficient 3B-parameter model distilled from Gemini 2.5 Flash and trained via supervised fine-tuning followed by Group Relative Policy Optimization (GRPO). Contribution/Results: The approach achieves high-quality audio-visual co-understanding and caption generation with limited training data, improving both multimodal video understanding and training efficiency in real-world scenarios.
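The summary names Group Relative Policy Optimization (GRPO) without detail. Below is a minimal sketch of what that name refers to in the literature: rewards normalized within a group of sampled captions (no critic network), plus a PPO-style clipped surrogate. The scalar rewards, group size, and omission of the usual KL regularizer are assumptions for illustration, not details from the paper.

```python
# Minimal GRPO sketch: group-relative advantages + clipped surrogate loss.
# Assumed setup (not from the paper): G captions sampled per video prompt,
# each scored by some scalar reward; no KL penalty term shown here.
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize rewards within one group of G sampled captions.

    GRPO uses the group mean/std as the baseline, so no value network
    (critic) is needed.
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def grpo_policy_loss(logp_new: torch.Tensor,
                     logp_old: torch.Tensor,
                     advantages: torch.Tensor,
                     clip_eps: float = 0.2) -> torch.Tensor:
    """PPO-style clipped surrogate, applied per sampled caption."""
    ratio = torch.exp(logp_new - logp_old)  # importance ratio vs. old policy
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# Toy example: 8 captions sampled for one video, with made-up rewards.
rewards = torch.tensor([0.9, 0.4, 0.7, 0.2, 0.8, 0.5, 0.3, 0.6])
adv = grpo_advantages(rewards)
loss = grpo_policy_loss(logp_new=torch.randn(8) * 0.1,
                        logp_old=torch.zeros(8),
                        advantages=adv)
```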

📝 Abstract
Real-world user-generated videos, especially on platforms like TikTok, often feature rich and intertwined audio-visual content. However, existing video captioning benchmarks and models remain predominantly visual-centric, overlooking the crucial role of audio in conveying scene dynamics, speaker intent, and narrative context. This lack of omni datasets and lightweight, capable models hampers progress in fine-grained, multimodal video understanding. To address these challenges, we introduce UGC-VideoCap, a new benchmark and model framework specifically designed for detailed omnimodal captioning of short-form user-generated videos. Unlike prior datasets, UGC-VideoCap emphasizes balanced integration of audio and visual modalities, featuring 1,000 TikTok videos annotated through a structured three-stage human-in-the-loop pipeline covering audio-only, visual-only, and joint audio-visual semantics. The benchmark also includes 4,000 carefully crafted QA pairs probing both unimodal and cross-modal understanding. Alongside the dataset, we propose UGC-VideoCaptioner (3B), a 3B-parameter captioning model distilled from Gemini 2.5 Flash. Using a novel two-stage training strategy, supervised fine-tuning followed by Group Relative Policy Optimization (GRPO), our approach enables efficient adaptation from limited data while maintaining competitive performance. Together, our benchmark and model offer a high-quality foundation and a data-efficient solution for advancing omnimodal video captioning in unconstrained real-world UGC settings.
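For concreteness, here is one hypothetical layout for a benchmark entry, inferred only from the abstract (three annotation stages and QA pairs tagged by modality). The field names and example values are illustrative, not the released UGC-VideoCap schema.

```python
# Hypothetical per-video record for UGC-VideoCap, mirroring the three
# annotation stages (audio-only, visual-only, joint) and modality-tagged
# QA pairs described in the abstract. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class QAPair:
    question: str
    answer: str
    modality: str  # "audio", "visual", or "audio-visual"

@dataclass
class UGCVideoCapEntry:
    video_id: str
    audio_caption: str   # stage 1: audio-only annotation
    visual_caption: str  # stage 2: visual-only annotation
    omni_caption: str    # stage 3: joint audio-visual caption
    qa_pairs: list[QAPair] = field(default_factory=list)

# Illustrative entry (contents invented for the example).
entry = UGCVideoCapEntry(
    video_id="tiktok_0001",
    audio_caption="Upbeat pop track plays under an energetic voice-over.",
    visual_caption="A creator unboxes a phone at a cluttered desk.",
    omni_caption="While upbeat pop plays, the creator narrates a phone unboxing.",
    qa_pairs=[QAPair("What kind of music plays?", "Upbeat pop", "audio")],
)
```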
Problem

Research questions and friction points this paper is trying to address.

Lack of audio-visual integration in video captioning benchmarks
Absence of lightweight models for fine-grained video understanding
Need for omni datasets in unconstrained UGC video settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Omnimodal integration of audio and visual data
Two-stage training: supervised fine-tuning followed by GRPO (see the sketch after this list)
Lightweight 3B model distilled from Gemini 2.5 Flash
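A runnable toy skeleton of how the two-stage recipe could be wired together. The student model, teacher captions, and caption reward are all stub stand-ins (hypothetical); only the control flow, SFT on distilled captions followed by GRPO with group-normalized rewards, follows the strategy described above.

```python
# Toy two-stage schedule: SFT on teacher captions, then GRPO refinement.
# StubCaptioner and caption_reward are placeholders, not the paper's code.
import random
import torch

def caption_reward(video: str, caption: str) -> float:
    """Placeholder scorer; the paper's actual caption reward is not given here."""
    return random.random()

class StubCaptioner:
    """Duck-typed stand-in for the 3B student model."""
    def sft_step(self, video: str, caption: str) -> None:
        pass  # would minimize next-token NLL on the teacher caption

    def sample_caption(self, video: str) -> str:
        return f"sampled caption {random.randint(0, 999)} for {video}"

    def grpo_step(self, video, samples, advantages) -> None:
        pass  # would apply a clipped surrogate update with these advantages

def train_two_stage(model, videos, teacher_captions, group_size=8):
    # Stage 1: supervised fine-tuning on Gemini 2.5 Flash captions.
    for video, caption in zip(videos, teacher_captions):
        model.sft_step(video, caption)
    # Stage 2: GRPO refinement; rewards are normalized within each group,
    # so no value network (critic) is required.
    for video in videos:
        samples = [model.sample_caption(video) for _ in range(group_size)]
        rewards = torch.tensor([caption_reward(video, s) for s in samples])
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
        model.grpo_step(video, samples, adv)
    return model

trained = train_two_stage(StubCaptioner(), ["vid_0"], ["a teacher caption"])
```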