Geometry-Guided Camera Motion Understanding in VideoLLMs

📅 2026-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited ability of existing Video Large Language Models (VideoLLMs) to accurately interpret fine-grained camera motion, a weakness stemming from their lack of explicit modeling of critical geometric cues. To this end, we present the first systematic benchmark for camera motion understanding, comprising a large-scale synthetic dataset, CameraMotionDataset, and a vision-language question-answering benchmark, CameraMotionVQA, which together reveal the weak representation of motion cues in current visual encoders. We further propose a lightweight, training-free, and model-agnostic geometric guidance injection framework: a 3D foundation model extracts geometric signals, a temporal classifier predicts constrained motion primitives from them, and the predictions are integrated into downstream reasoning via structured prompting. This approach significantly improves both the accuracy of camera motion recognition and the camera-awareness of VideoLLM responses.

📝 Abstract
Camera motion is a fundamental geometric signal that shapes visual perception and cinematic style, yet current video-capable vision-language models (VideoLLMs) rarely represent it explicitly and often fail on fine-grained motion primitives. We address this gap with a framework of $\textbf{benchmarking}$, $\textbf{diagnosis}$, and $\textbf{injection}$. We curate $\textbf{CameraMotionDataset}$, a large-scale synthetic dataset with explicit camera control, formulate camera motion as constraint-aware multi-label recognition, and construct a VQA benchmark, $\textbf{CameraMotionVQA}$. Across diverse off-the-shelf VideoLLMs, we observe substantial errors in recognizing camera motion primitives. Probing experiments on a Qwen2.5-VL vision encoder suggest that camera motion cues are weakly represented, especially in deeper ViT blocks, helping explain the observed failure modes. To bridge this gap without costly training or fine-tuning, we propose a lightweight, model-agnostic pipeline that extracts geometric camera cues from 3D foundation models (3DFMs), predicts constrained motion primitives with a temporal classifier, and injects them into downstream VideoLLM inference via structured prompting. Experiments demonstrate improved motion recognition and more camera-aware model responses, highlighting geometry-driven cue extraction and structured prompting as practical steps toward camera-aware VideoLLM and VLA systems. The dataset and benchmark are publicly available at https://hf.co/datasets/fengyee/camera-motion-dataset-and-benchmark.
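The abstract's final step, injecting predicted motion primitives into VideoLLM inference via structured prompting, can be sketched as follows. This is a minimal illustration, not the paper's actual interface: the primitive taxonomy, the constraint groups, and all function names are assumptions chosen to illustrate constraint-aware multi-label prediction feeding a structured prompt.

```python
# Hypothetical sketch of the geometric guidance injection step: primitives
# predicted by a temporal classifier are validated against mutual-exclusion
# constraints, then rendered into a structured prompt prefix for a VideoLLM.
# The taxonomy below is illustrative, not taken from the paper.
PRIMITIVE_GROUPS = {
    "pan": ["pan_left", "pan_right"],
    "tilt": ["tilt_up", "tilt_down"],
    "zoom": ["zoom_in", "zoom_out"],
    "translation": ["dolly_forward", "dolly_backward", "truck_left", "truck_right"],
}

def validate_primitives(predicted):
    """Enforce at most one primitive per mutually exclusive group."""
    chosen = []
    for group, labels in PRIMITIVE_GROUPS.items():
        hits = [p for p in predicted if p in labels]
        if len(hits) > 1:
            raise ValueError(f"conflicting {group} primitives: {hits}")
        chosen.extend(hits)
    return chosen

def build_camera_prompt(predicted, question):
    """Inject validated motion cues as a structured prefix to the user question."""
    primitives = validate_primitives(predicted)
    cue = ", ".join(primitives) if primitives else "static (no camera motion)"
    return (
        "[Camera-motion cues from geometric analysis]\n"
        f"Detected primitives: {cue}\n\n"
        f"Question: {question}"
    )

prompt = build_camera_prompt(["pan_left", "zoom_in"],
                             "Describe the camera work in this shot.")
print(prompt)
```

Because the prompt is plain text, this injection is model-agnostic: the same prefix can be prepended to any VideoLLM's query without retraining, which is the property the abstract emphasizes.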
Problem

Research questions and friction points this paper is trying to address.

camera motion
VideoLLMs
visual perception
motion primitives
vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

camera motion understanding
geometry-guided prompting
3D foundation models
structured prompting
VideoLLM benchmarking
🔎 Similar Papers
Haoan Feng
University of Maryland, College Park
Sri Harsha Musunuri
Dolby Laboratories Inc.
Guan-Ming Su
Dolby Labs
multimedia signal processing · multimedia communications