🤖 AI Summary
To address the scarcity of real-world long-video data and the high cost of manual annotation in long-video understanding, this paper introduces the first AI-synthesized movie framework tailored for instruction tuning of Long-Video Large Models (LVLMs). Methodologically, it pioneers a novel video generation paradigm integrating textual inversion with GPT-4–driven collaborative control: textual inversion enables fine-grained stylistic customization; GPT-4 generates structured scripts and shot-level planning; and multi-stage consistency constraints enhance keyframe coherence. Concurrently, the framework auto-generates stylistically consistent keyframes and corresponding question-answer pairs. Experiments demonstrate substantial improvements in LVLM performance on complex narrative tasks—including long-video question answering and plot reasoning—while mitigating data bias and coverage gaps. The approach outperforms state-of-the-art methods across multiple benchmarks.
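The pipeline described above can be sketched in code. Note that this is a minimal illustrative skeleton, not the paper's actual implementation: every function name below (`generate_script`, `learn_style_embedding`, `generate_keyframe`, `generate_qa`) is a hypothetical placeholder, and a real system would call GPT-4 for scripting/QA and a diffusion model with a learned textual-inversion token for keyframes.

```python
# Hypothetical sketch of a MovieLLM-style synthesis pipeline. All function
# names are illustrative stubs, not the paper's API: real versions would call
# GPT-4 (scripts, QA pairs) and a diffusion model with textual inversion.
from dataclasses import dataclass

@dataclass
class Scene:
    description: str                    # shot-level plan from the script model
    keyframe: str                       # placeholder for a generated image
    qa_pairs: list[tuple[str, str]]     # (question, answer) instruction data

def generate_script(premise: str, num_scenes: int = 3) -> list[str]:
    """Stand-in for GPT-4: expand one premise into shot-level scene plans."""
    return [f"Scene {i + 1} of '{premise}'" for i in range(num_scenes)]

def learn_style_embedding(premise: str) -> str:
    """Stand-in for textual inversion: a token pinning down a visual style."""
    return f"<style:{premise.lower().replace(' ', '-')}>"

def generate_keyframe(scene_plan: str, style_token: str) -> str:
    """Stand-in for a diffusion model conditioned on the shared style token,
    which is what keeps keyframes stylistically consistent across scenes."""
    return f"image({style_token} | {scene_plan})"

def generate_qa(scene_plan: str) -> list[tuple[str, str]]:
    """Stand-in for GPT-4 producing QA pairs grounded in each scene."""
    return [(f"What happens in {scene_plan}?", scene_plan)]

def synthesize_movie(premise: str) -> list[Scene]:
    style = learn_style_embedding(premise)  # one token shared by all scenes
    return [
        Scene(plan, generate_keyframe(plan, style), generate_qa(plan))
        for plan in generate_script(premise)
    ]

movie = synthesize_movie("A detective in a rainy city")
```

The key design point this sketch tries to capture is the division of labor: one description drives the whole movie, the script model handles narrative structure, and a single reusable style embedding enforces cross-scene visual consistency.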
📝 Abstract
The development of multimodal models has marked a significant step forward in how machines understand videos. These models have shown promise in analyzing short video clips. However, when it comes to longer formats like movies, they often fall short. The main hurdles are the lack of high-quality, diverse video data and the intensive work required to collect or annotate such data. In the face of these challenges, we propose MovieLLM, a novel framework designed to synthesize consistent, high-quality video data for instruction tuning. The pipeline is carefully designed to control the style of videos by improving the textual inversion technique with the powerful text-generation capability of GPT-4. As the first framework of its kind, our approach stands out for its flexibility and scalability, empowering users to create customized movies from a single description. This makes it a superior alternative to traditional data collection methods. Our extensive experiments validate that the data produced by MovieLLM significantly improves the performance of multimodal models in understanding complex video narratives, overcoming the scarcity and bias limitations of existing datasets.