🤖 AI Summary
This work addresses the limitations of existing text-driven multi-shot video generation methods, which suffer from imprecise camera control and reliance on manually specified trajectories, leading to high costs and frequent failures. To overcome these challenges, we propose a "Plan-then-Control" framework that first leverages a vision-language model (VLM) to plan a globally consistent camera trajectory and then employs a camera adapter to generate the multi-shot video. We construct a well-aligned dataset of (caption, trajectory, video) triplets, design an automated multi-shot camera calibration pipeline that registers disjoint single-shot trajectories in a unified coordinate system, and introduce a three-track evaluation protocol. Experiments on ShotVerse-Bench demonstrate that our approach automatically generates multi-shot videos exhibiting cinematic aesthetics, accurate camera trajectories, and strong cross-shot consistency.
📝 Abstract
Text-driven video generation has democratized film creation, but camera control in cinematic multi-shot scenarios remains a significant bottleneck. Implicit textual prompts lack precision, while explicit trajectory conditioning imposes prohibitive manual overhead and often triggers execution failures in current models. To overcome this bottleneck, we propose a data-centric paradigm shift, positing that aligned (Caption, Trajectory, Video) triplets form an inherent joint distribution that can connect automated planning and precise execution. Guided by this insight, we present ShotVerse, a "Plan-then-Control" framework that decouples generation into two collaborative agents: a Vision-Language Model (VLM)-based Planner that leverages spatial priors to derive cinematic, globally aligned trajectories from text, and a Controller that renders these trajectories into multi-shot video content via a camera adapter. Central to our approach is the construction of a data foundation: we design an automated multi-shot camera calibration pipeline that aligns disjoint single-shot trajectories into a unified global coordinate system. This enables the curation of ShotVerse-Bench, a high-fidelity cinematic dataset with a three-track evaluation protocol that serves as the bedrock of our framework. Extensive experiments demonstrate that ShotVerse effectively bridges the gap between unreliable textual control and labor-intensive manual trajectory specification, achieving superior cinematic aesthetics and generating multi-shot videos that are both camera-accurate and cross-shot consistent.