MovieLLM: Enhancing Long Video Understanding with AI-Generated Movies

📅 2024-03-03
🏛️ arXiv.org
📈 Citations: 21
Influential: 2
📄 PDF
🤖 AI Summary
To address the scarcity of real-world long-video data and the high cost of manual annotation in long-video understanding, this paper introduces the first AI-synthesized movie framework tailored for instruction tuning of Long-Video Large Models (LVLMs). Methodologically, it pioneers a novel video generation paradigm integrating textual inversion with GPT-4–driven collaborative control: textual inversion enables fine-grained stylistic customization; GPT-4 generates structured scripts and shot-level planning; and multi-stage consistency constraints enhance keyframe coherence. Concurrently, the framework auto-generates stylistically consistent keyframes and corresponding question-answer pairs. Experiments demonstrate substantial improvements in LVLM performance on complex narrative tasks—including long-video question answering and plot reasoning—while mitigating data bias and coverage gaps. The approach outperforms state-of-the-art methods across multiple benchmarks.

📝 Abstract
The development of multimodal models has marked a significant step forward in how machines understand videos. These models have shown promise in analyzing short video clips. However, when it comes to longer formats like movies, they often fall short. The main hurdles are the lack of high-quality, diverse video data and the intensive work required to collect or annotate such data. In the face of these challenges, we propose MovieLLM, a novel framework designed to synthesize consistent and high-quality video data for instruction tuning. The pipeline is carefully designed to control the style of videos by improving the textual inversion technique with the powerful text generation capability of GPT-4. As the first framework of its kind, our approach stands out for its flexibility and scalability, empowering users to create customized movies from a single description. This makes it a superior alternative to traditional data collection methods. Our extensive experiments validate that the data produced by MovieLLM significantly improves the performance of multimodal models in understanding complex video narratives, overcoming the limitations of existing datasets regarding scarcity and bias.
Problem

Research questions and friction points this paper is trying to address.

Automating video dataset creation for LVLM fine-tuning
Ensuring style consistency in generated video keyframes
Generating diverse QA pairs for LVLM instruction tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM generates structured movie plots and QA pairs
Style Immobilization ensures consistent frame style
Integrates descriptions and embeddings for coherent keyframes
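The innovations above describe a four-stage generation flow: an LLM expands one description into a structured plot, textual inversion fixes a reusable style embedding ("style immobilization"), keyframes are rendered under that shared style, and QA pairs are written from the plot. A minimal sketch of that flow, with all function bodies as illustrative stand-ins (assumptions, not the authors' implementation; real versions would call GPT-4 and a diffusion model):

```python
# Hedged sketch of a MovieLLM-style data-generation pipeline.
# Every function body here is a placeholder assumption; only the
# stage ordering reflects the method described above.

def generate_plot(description: str) -> dict:
    """Stage 1 (assumed): an LLM expands one description into a structured plot."""
    return {"title": description.title(),
            "scenes": [f"{description} - scene {i}" for i in range(1, 4)]}

def learn_style_embedding(plot: dict) -> list:
    """Stage 2 (assumed): textual inversion learns one style-token embedding
    that is then frozen and reused across all frames ("style immobilization")."""
    return [0.0] * 8  # placeholder embedding vector

def render_keyframes(plot: dict, style: list) -> list:
    """Stage 3 (assumed): a diffusion model renders one keyframe per scene,
    conditioned on the shared style embedding for visual consistency."""
    return [f"keyframe<{scene}>" for scene in plot["scenes"]]

def make_qa_pairs(plot: dict) -> list:
    """Stage 4 (assumed): the LLM writes QA pairs grounded in the plot."""
    return [(f"What happens in {scene}?", scene) for scene in plot["scenes"]]

def movie_llm_pipeline(description: str) -> dict:
    plot = generate_plot(description)
    style = learn_style_embedding(plot)      # one embedding for the whole movie
    frames = render_keyframes(plot, style)   # consistent style across keyframes
    qa = make_qa_pairs(plot)                 # instruction-tuning supervision
    return {"plot": plot, "keyframes": frames, "qa": qa}

sample = movie_llm_pipeline("a noir detective story")
print(len(sample["keyframes"]), len(sample["qa"]))
```

The key design point the sketch mirrors is that the style embedding is computed once per movie and shared by every keyframe call, which is what gives the generated frames their stylistic consistency.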
👥 Authors
Zhende Song (Fudan University)
Chenchen Wang
Jiamu Sheng (Fudan University)
C. Zhang (Tencent PCG)
Gang Yu (Tencent PCG)
Jiayuan Fan (Fudan University; Computer vision, Machine learning)
Tao Chen (Fudan University)