AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers

📅 2024-11-27
🏛️ arXiv.org
📈 Citations: 7
Influential: 1
🤖 AI Summary
Existing text-to-video models with 3D camera control suffer from imprecise control and degraded generation quality. Method: Analyzing camera motion from first principles, the authors identify it as a low-frequency signal and find that unconditional video diffusion transformers implicitly estimate camera pose, with only a subset of layers carrying camera information. Leveraging these insights, they adjust the pose-conditioning schedules at train and test time, restrict camera-condition injection to that subset of layers, and curate a 20K dataset of diverse dynamic videos filmed with stationary cameras to help the model decouple camera motion from scene dynamics. Contribution/Results: Experiments demonstrate a 4× reduction in trainable parameters, faster training, a 10% improvement in visual quality, more natural motion synthesis, and new state-of-the-art camera control accuracy.

📝 Abstract
Numerous works have recently integrated 3D camera control into foundational text-to-video models, but the resulting camera control is often imprecise, and video generation quality suffers. In this work, we analyze camera motion from a first-principles perspective, uncovering insights that enable precise 3D camera manipulation without compromising synthesis quality. First, we determine that motion induced by camera movements in videos is low-frequency in nature. This motivates us to adjust the train- and test-time pose-conditioning schedules, accelerating training convergence while improving visual and motion quality. Then, by probing the representations of an unconditional video diffusion transformer, we observe that they implicitly perform camera pose estimation under the hood, and that only a subset of their layers contain the camera information. This suggested limiting the injection of camera conditioning to a subset of the architecture to prevent interference with other video features, leading to a 4x reduction of trainable parameters, improved training speed, and 10% higher visual quality. Finally, we complement the typical dataset for camera control learning with a curated dataset of 20K diverse, dynamic videos with stationary cameras. This helps the model distinguish between camera and scene motion and improves the dynamics of generated pose-conditioned videos. We compound these findings to design the Advanced 3D Camera Control (AC3D) architecture, the new state-of-the-art model for generative video modeling with camera control.
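The layer-subset injection described in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the layer count, the per-adapter parameter size, and the choice of injecting into the first quarter of blocks are not values from the paper, which builds on a full video diffusion transformer.

```python
# Toy sketch of restricting camera conditioning to a subset of
# transformer blocks. Sizes and the injection subset are illustrative
# assumptions, not AC3D's actual configuration.

NUM_LAYERS = 28              # assumed DiT depth
ADAPTER_PARAMS = 1_000_000   # assumed trainable params per conditioning adapter

def forward(x, camera_embed, inject_layers):
    """Toy forward pass: each 'layer' doubles the feature, and the camera
    embedding is added only before layers listed in inject_layers."""
    for i in range(NUM_LAYERS):
        if i in inject_layers:
            x = x + camera_embed  # camera conditioning enters here only
        x = 2.0 * x               # stand-in for a transformer block
    return x

# Trainable conditioning parameters: every layer vs. only the first
# quarter of layers (where probing suggests camera information lives):
full_params = NUM_LAYERS * ADAPTER_PARAMS
sparse_params = (NUM_LAYERS // 4) * ADAPTER_PARAMS
print(full_params // sparse_params)  # → 4
```

Conditioning fewer layers shrinks only the trainable adapter stack, which is consistent with the reported 4x parameter reduction when the injection subset is a quarter of the depth.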
Problem

Research questions and friction points this paper is trying to address.

Improving imprecise 3D camera control in video diffusion models
Optimizing camera conditioning to enhance video generation quality
Distinguishing camera and scene motion for better dynamic videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adjust pose conditioning schedules for better quality
Limit camera conditioning to subset of layers
Use curated dataset to distinguish motion types
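The schedule adjustment in the first bullet can be sketched as a gate on the camera signal: since camera-induced motion is low-frequency, it is largely decided at the high-noise (early) denoising steps, so conditioning there is what matters. The 0.4 cutoff fraction below is an illustrative assumption, not the paper's value.

```python
# Toy sketch of a pose-conditioning schedule: apply the camera signal
# only during the high-noise portion of denoising. The cutoff fraction
# is an illustrative assumption.

def camera_condition_weight(t, t_max=1000, cutoff=0.4):
    """Return 1.0 while the noise level t is in the earliest `cutoff`
    fraction of the schedule (t counts down from t_max to 0), else 0.0."""
    return 1.0 if t >= (1.0 - cutoff) * t_max else 0.0

# Conditioning is on for the first 40% of denoising and off afterwards:
weights = [camera_condition_weight(t) for t in (999, 700, 599, 100)]
print(weights)  # [1.0, 1.0, 0.0, 0.0]
```

A smooth ramp instead of a hard 0/1 gate would be an equally plausible reading; the abstract only states that the schedules were adjusted, not their exact form.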