ShotVerse: Advancing Cinematic Camera Control for Text-Driven Multi-Shot Video Creation

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing text-driven multi-shot video generation methods, which suffer from imprecise camera control and rely on manually specified trajectories, leading to high authoring costs and frequent execution failures. To overcome these challenges, we propose a “plan-and-control” framework that first leverages a vision-language model (VLM) to plan a globally consistent camera trajectory and then employs a camera adapter to generate multi-shot videos that follow it. We construct a well-aligned dataset of (caption, trajectory, video) triplets, design an automated multi-shot camera calibration pipeline with a unified coordinate system, and introduce a three-track evaluation protocol. Experiments on ShotVerse-Bench demonstrate that our approach automatically generates multi-shot videos exhibiting cinematic aesthetics, accurate camera trajectories, and strong cross-shot consistency.
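
The two-stage split reads naturally as a small pipeline: plan once, then execute shot by shot. Below is a minimal, runnable sketch of that flow; the class and function names (ShotPlan, plan, control) are illustrative assumptions standing in for the paper's components, not its actual API.

```python
# A minimal, illustrative sketch of the plan-and-control split described above.
# The names here are assumptions for exposition, not the paper's actual API;
# real VLM and video-model calls would replace the stubbed bodies.
from dataclasses import dataclass

@dataclass
class ShotPlan:
    caption: str                     # per-shot text description
    trajectory: list[list[float]]    # camera poses, e.g. flattened 4x4 extrinsics

def plan(script: str) -> list[ShotPlan]:
    """Stage 1 (Planner): a VLM maps the script to per-shot captions plus a
    globally consistent camera trajectory. Stubbed here with a single shot."""
    return [ShotPlan(caption=script, trajectory=[[0.0] * 16])]

def control(plans: list[ShotPlan]) -> list[str]:
    """Stage 2 (Controller): a camera adapter conditions the video model on
    each planned trajectory; here we only report what would be rendered."""
    return [f"render({p.caption!r}, poses={len(p.trajectory)})" for p in plans]

if __name__ == "__main__":
    shots = plan("A slow dolly-in on the protagonist, then a cut to a wide shot.")
    for clip in control(shots):
        print(clip)
```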

📝 Abstract
Text-driven video generation has democratized film creation, but camera control in cinematic multi-shot scenarios remains a significant bottleneck. Implicit textual prompts lack precision, while explicit trajectory conditioning imposes prohibitive manual overhead and often triggers execution failures in current models. To overcome this bottleneck, we propose a data-centric paradigm shift, positing that aligned (Caption, Trajectory, Video) triplets form an inherent joint distribution that can connect automated plotting and precise execution. Guided by this insight, we present ShotVerse, a "Plan-then-Control" framework that decouples generation into two collaborative agents: a Vision-Language Model (VLM)-based Planner that leverages spatial priors to obtain cinematic, globally aligned trajectories from text, and a Controller that renders these trajectories into multi-shot video content via a camera adapter. Central to our approach is the construction of a data foundation: we design an automated multi-shot camera calibration pipeline that aligns disjoint single-shot trajectories into a unified global coordinate system. This facilitates the curation of ShotVerse-Bench, a high-fidelity cinematic dataset with a three-track evaluation protocol that serves as the bedrock for our framework. Extensive experiments demonstrate that ShotVerse effectively bridges the gap between unreliable textual control and labor-intensive manual plotting, achieving superior cinematic aesthetics and generating multi-shot videos that are both camera-accurate and cross-shot consistent.
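
One concrete technical step the abstract names is the calibration pipeline that places per-shot camera trajectories, each estimated in its own arbitrary local frame, into one global coordinate system. The sketch below shows what such an alignment could look like in principle: each shot's first pose is anchored to the previous shot's last pose via a rigid transform. This anchoring rule and the function names are assumptions for exposition; the paper's pipeline may rely on a different alignment signal (e.g., shared scene geometry).

```python
# Hedged sketch: chaining per-shot camera trajectories into one global frame.
# The anchoring heuristic (pin each shot's first pose to the previous shot's
# last pose) is an assumption for illustration, not the paper's method.
import numpy as np

def align_shots(shot_trajectories: list[np.ndarray]) -> np.ndarray:
    """Each element is an (N_i, 4, 4) array of camera-to-world poses in that
    shot's local frame; returns all poses expressed in shot 0's frame."""
    aligned = [shot_trajectories[0]]
    for traj in shot_trajectories[1:]:
        anchor = aligned[-1][-1]                      # last global pose so far
        to_global = anchor @ np.linalg.inv(traj[0])   # local -> global transform
        aligned.append(np.einsum("ij,njk->nik", to_global, traj))
    return np.concatenate(aligned, axis=0)

# Tiny usage example: two 2-pose shots, the second in its own local frame.
shot_a = np.stack([np.eye(4), np.eye(4)])
shot_a[1, 0, 3] = 1.0                                 # dolly 1 unit along x
shot_b = np.stack([np.eye(4), np.eye(4)])
shot_b[1, 2, 3] = 0.5                                 # push 0.5 units along z
global_traj = align_shots([shot_a, shot_b])
print(global_traj.shape)                              # (4, 4, 4): 4 poses, one frame
```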
Problem

Research questions and friction points this paper is trying to address.

camera control
text-driven video generation
multi-shot video
cinematic aesthetics
trajectory conditioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

camera control
text-to-video
multi-shot generation
trajectory planning
data-centric learning
Songlin Yang
MMLab@HKUST, The Hong Kong University of Science and Technology
Zhe Wang
The Hong Kong University of Science and Technology
Atmospheric chemistry, Heterogeneous Chemistry, SOA formation, Cloud-aerosol-gas interaction
Xuyi Yang
MMLab@HKUST, The Hong Kong University of Science and Technology
Songchun Zhang
The Hong Kong University of Science and Technology
Generative AI
Xianghao Kong
MMLab@HKUST, The Hong Kong University of Science and Technology
Taiyi Wu
Tencent Video AI Center, PCG, Tencent
Xiaotong Zhao
Tencent Video AI Center, PCG, Tencent
Ran Zhang
Tencent America
Computer Graphics, 3D Vision, 3D Understanding, Neural Rendering, Generative AI
Alan Zhao
Tencent Video AI Center, PCG, Tencent
Anyi Rao
Assistant Professor, HKUST
Human AI, AI for Creativity, Generative AI, Content Creation, Film