🤖 AI Summary
Current vision-language models (VLMs) lack systematic evaluation of their understanding of cinematic shot grammar, including composition, camera motion, lighting, and other filmic conventions, which hinders their application to fine-grained video understanding and generation. To address this, we introduce ShotBench, the first multimodal benchmark dedicated to cinematic shot-language comprehension, comprising over 3.5K expert-annotated question-answer pairs spanning eight core cinematography dimensions. We further release ShotQA, a large-scale training dataset of roughly 70K QA pairs, and develop ShotVL, a VLM trained on ShotQA with supervised fine-tuning followed by Group Relative Policy Optimization (GRPO) to strengthen shot-level semantic reasoning. Experiments show that ShotVL sets a new state of the art on ShotBench, significantly outperforming all existing open- and closed-source VLMs. All models, datasets, and code are publicly released to advance research in AI-driven film analysis and generative cinematography.
📝 Abstract
Cinematography, the fundamental visual language of film, is essential for conveying narrative, emotion, and aesthetic quality. While recent Vision-Language Models (VLMs) demonstrate strong general visual understanding, their proficiency in comprehending the nuanced cinematic grammar embedded within individual shots remains largely unexplored and lacks robust evaluation. This critical gap limits both fine-grained visual comprehension and the precision of AI-assisted video generation. To address this, we introduce ShotBench, a comprehensive benchmark specifically designed for cinematic language understanding. It features over 3.5k expert-annotated QA pairs from images and video clips, meticulously curated from over 200 acclaimed (predominantly Oscar-nominated) films and spanning eight key cinematography dimensions. Our evaluation of 24 leading VLMs on ShotBench reveals their substantial limitations: even the top-performing model achieves less than 60% average accuracy, particularly struggling with fine-grained visual cues and complex spatial reasoning. To catalyze advancement in this domain, we construct ShotQA, a large-scale multimodal dataset comprising approximately 70k cinematic QA pairs. Leveraging ShotQA, we develop ShotVL through supervised fine-tuning and Group Relative Policy Optimization. ShotVL significantly outperforms all existing open-source and proprietary models on ShotBench, establishing new state-of-the-art performance. We open-source our models, data, and code to foster rapid progress in this crucial area of AI-driven cinematic understanding and generation.
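For readers unfamiliar with GRPO, the core idea is that each sampled answer is rewarded relative to the other answers drawn for the same question, so no separate value network is required. The sketch below illustrates that group-relative advantage normalization in a minimal form; the function name, toy rewards, and tensor shapes are illustrative assumptions, not the released ShotVL training code.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize each sampled response's reward against its own group.

    rewards: tensor of shape (num_questions, group_size), one group of
    sampled answers per prompt (e.g. per ShotBench-style question).
    Returns advantages of the same shape.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Toy example: 2 questions, 4 sampled answers each; reward is 1 if the
# predicted option matches the annotation, else 0 (illustrative only).
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(group_relative_advantages(rewards))
```

Answers that beat their group's average receive positive advantages and are reinforced; the actual ShotVL objective also includes the usual clipped policy-ratio and KL terms from the GRPO formulation.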