LongVPO: From Anchored Cues to Self-Reasoning for Long-Form Video Preference Optimization

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the scarcity of long-video annotations, which prevents short-context vision-language models from comprehending ultra-long videos. To overcome this, the authors propose a two-stage unsupervised preference optimization framework. In the first stage, questions are anchored to key short video clips to generate and filter synthetic preference triplets. The second stage leverages recursive captioning and large language models to construct multi-segment reasoning tasks, enabling self-supervised preference alignment over long videos. Requiring no human annotations and only 16K synthetic samples, the method outperforms current open-source state-of-the-art approaches on multiple long-video benchmarks while maintaining strong performance on short-video tasks such as MVBench, significantly enhancing semantic understanding of ultra-long videos.
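The Stage-1 triplet synthesis described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the function and parameter names (`build_preference_triplet`, `qa_generator`, the dict keys) are assumptions, and the visual-similarity and question-specificity filters mentioned in the paper are omitted for brevity.

```python
import random

def build_preference_triplet(anchor_clip, distractor_clips, qa_generator):
    """Hypothetical sketch of Stage-1 preference-triplet synthesis.

    A question is anchored to one short clip; that clip is then interleaved
    with distractor clips to form a pseudo long-video context. The answer
    grounded in the anchor clip becomes 'chosen'; a plausible but unsupported
    answer becomes 'rejected'.
    """
    question, grounded_answer, wrong_answer = qa_generator(anchor_clip)
    # Insert the anchor at a random position to mitigate positional bias.
    clips = list(distractor_clips)
    clips.insert(random.randrange(len(clips) + 1), anchor_clip)
    return {
        "video": clips,             # interleaved long-video context
        "prompt": question,
        "chosen": grounded_answer,  # supported by the anchor clip
        "rejected": wrong_answer,   # not supported by the anchor clip
    }
```

In the actual pipeline, triplets that fail the visual-similarity or question-specificity checks would be discarded before training.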

📝 Abstract
We present LongVPO, a novel two-stage Direct Preference Optimization framework that enables short-context vision-language models to robustly understand ultra-long videos without any long-video annotations. In Stage 1, we synthesize preference triples by anchoring questions to individual short clips, interleaving them with distractors, and applying visual-similarity and question-specificity filtering to mitigate positional bias and ensure unambiguous supervision. We also approximate the reference model's scoring over long contexts by evaluating only the anchor clip, reducing computational overhead. In Stage 2, we employ a recursive captioning pipeline on long videos to generate scene-level metadata, then use a large language model to craft multi-segment reasoning queries and dispreferred responses, aligning the model's preferences through multi-segment reasoning tasks. With only 16K synthetic examples and no costly human labels, LongVPO outperforms the state-of-the-art open-source models on multiple long-video benchmarks, while maintaining strong short-video performance (e.g., on MVBench), offering a scalable paradigm for efficient long-form video understanding.
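The abstract builds on the standard Direct Preference Optimization objective, which can be sketched as below. This is the generic DPO loss on a single preference pair, not LongVPO's implementation; per the abstract, the novelty is that the reference-model log-probabilities over the long context are approximated by scoring only the anchor clip.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss on summed log-probabilities of a preference pair.

    pi_*  : policy log-prob of the chosen/rejected response given the video.
    ref_* : reference-model log-prob; per the abstract, LongVPO approximates
            these by evaluating only the anchor clip rather than the full
            long context, reducing computational overhead.
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Loss is -log sigmoid(margin): small when the policy prefers the chosen
    # response more strongly than the reference does.
    return math.log(1.0 + math.exp(-margin))

# Example: the policy favors the chosen response relative to the reference.
loss = dpo_loss(pi_chosen=-5.0, pi_rejected=-9.0,
                ref_chosen=-6.0, ref_rejected=-8.0)
```

Flipping the policy's preference (swapping `pi_chosen` and `pi_rejected`) increases the loss, which is what drives alignment toward the chosen responses.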
Problem

Research questions and friction points this paper is trying to address.

long-form video understanding
preference optimization
vision-language models
annotation-free learning
video reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct Preference Optimization
Long-Form Video Understanding
Synthetic Preference Data
Recursive Captioning
Multi-Segment Reasoning
Zhenpeng Huang
State Key Laboratory for Novel Software Technology, Nanjing University
Jiaqi Li
Unknown affiliation
Machine Learning · Deep Learning
Zihan Jia
State Key Laboratory for Novel Software Technology, Nanjing University
Xinhao Li
Nanjing University
Video Understanding · Multimodal LLM · Vision-Language Learning
Desen Meng
Nanjing University
Computer Vision · Multimodal Large Language Models
Lingxue Song
JIUTIAN Research
Xi Chen
JIUTIAN Research
Liang Li
JIUTIAN Research
Limin Wang
Nanjing University
Computer Vision · Action Recognition · Video Understanding