VideoWeave: A Data-Centric Approach for Efficient Video Understanding

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency in video-language model training caused by the high computational cost of processing long videos and the scarcity of annotated data. The authors propose VideoWeave, a novel paradigm centered on data recombination: short video clips and their corresponding subtitles are stitched together—either randomly or via visual clustering—to synthesize long-context training samples. Coupled with subtitle-augmentation techniques to reconstruct video–text pairs, this approach enhances temporal diversity and data utilization without altering the model architecture or optimization objective. Under identical computational budgets, VideoWeave substantially improves accuracy on video question-answering benchmarks, demonstrating its effectiveness in leveraging limited resources for more efficient multimodal learning.

📝 Abstract
Training video-language models is often prohibitively expensive due to the high cost of processing long frame sequences and the limited availability of annotated long videos. We present VideoWeave, a simple yet effective approach that improves data efficiency by constructing synthetic long-context training samples, splicing together short, captioned videos from existing datasets. Rather than modifying model architectures or optimization objectives, VideoWeave reorganizes available video-text pairs to expand temporal diversity within a fixed compute budget. We systematically study how different data composition strategies, such as random versus visually clustered splicing and caption enrichment, affect downstream performance on video question answering. Under identical compute constraints, models trained with VideoWeave achieve higher accuracy than conventional video finetuning. Our results highlight that reorganizing training data, rather than altering architectures, may offer a simple and scalable path for training video-language models. We link our code for all experiments here.
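The recombination idea in the abstract can be sketched as a small data-construction routine. This is a minimal illustration, not the authors' released code: the clip record format, the feature function, and the plain subtitle concatenation are all assumptions standing in for the paper's visual clustering and subtitle augmentation.

```python
import random

def weave(clips, k, strategy="random", feature_fn=None, seed=0):
    """Stitch k short captioned clips into one synthetic long-context sample.

    clips: list of dicts with keys "frames" (list) and "subtitle" (str).
    strategy: "random" samples clips uniformly; "clustered" greedily takes
    the k clips nearest a random anchor in feature space, a crude stand-in
    for the paper's visual-clustering-based splicing.
    """
    rng = random.Random(seed)
    if strategy == "random":
        chosen = rng.sample(clips, k)
    else:
        anchor = rng.choice(clips)
        anchor_feat = feature_fn(anchor)

        def sq_dist(clip):
            # squared Euclidean distance to the anchor's visual feature
            return sum((a - b) ** 2 for a, b in zip(anchor_feat, feature_fn(clip)))

        chosen = sorted(clips, key=sq_dist)[:k]

    return {
        # concatenate frame sequences to form the long-context video
        "frames": [f for c in chosen for f in c["frames"]],
        # joining subtitles in splice order approximates the rebuilt
        # video-text pair; the exact caption format is an assumption
        "subtitle": " ".join(c["subtitle"] for c in chosen),
    }
```

Usage: `weave(clips, k=8)` yields a random splice, while `weave(clips, k=8, strategy="clustered", feature_fn=my_embed)` groups visually similar clips, the two composition strategies the paper compares.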
Problem

Research questions and friction points this paper is trying to address.

video-language models
data efficiency
long-context video
annotated video scarcity
training cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

VideoWeave
data-centric
video-language models
synthetic long-context
data efficiency