🤖 AI Summary
Precise camera control in video generation is hindered by reliance on labor-intensive, manually annotated camera poses, which are scarce at scale and often inconsistent with depth estimation, causing train-test misalignment. To address this, we propose a geometry-aware unified tokenization framework that requires no camera annotations: it jointly estimates depth and camera parameters with geometry foundation models (e.g., VGGT), injects the resulting tokens into a pretrained video diffusion model via lightweight context blocks, and trains with a two-stage progressive curriculum. The method natively supports additional spatial control modalities, including layout guidance and inpainting. Evaluated across multiple benchmarks, it achieves state-of-the-art performance, significantly improving geometric consistency, temporal stability, and visual fidelity. Moreover, it exhibits strong generalization and plug-and-play adaptability to diverse downstream tasks without architectural modification.
📝 Abstract
Achieving precise camera control in video generation remains challenging, as existing methods often rely on camera pose annotations that are difficult to scale to large and dynamic datasets and are frequently inconsistent with depth estimation, leading to train-test discrepancies. We introduce CETCAM, a camera-controllable video generation framework that eliminates the need for camera annotations through a consistent and extensible tokenization scheme. CETCAM leverages recent advances in geometry foundation models, such as VGGT, to estimate depth and camera parameters and converts them into unified, geometry-aware tokens. These tokens are seamlessly integrated into a pretrained video diffusion backbone via lightweight context blocks. Trained in two progressive stages, CETCAM first learns robust camera controllability from diverse raw video data and then refines fine-grained visual quality using curated high-fidelity datasets. Extensive experiments across multiple benchmarks demonstrate state-of-the-art geometric consistency, temporal stability, and visual realism. Moreover, CETCAM exhibits strong adaptability to additional control modalities, including inpainting and layout control, highlighting its flexibility beyond camera control. The project page is available at https://sjtuytc.github.io/CETCam_project_page.github.io/.
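The abstract describes converting estimated depth maps and camera parameters into unified, geometry-aware tokens. As a rough illustration of what such a tokenization might look like (the paper's actual scheme is not specified here, so the pinhole unprojection, the patch-based token layout, and all function names below are assumptions), the sketch unprojects a depth map through camera intrinsics into a per-pixel 3D point map and then patchifies it into flat token vectors:

```python
import numpy as np

def unproject_depth(depth, K):
    """Back-project a depth map to camera-space 3D points (pinhole model).

    depth: (H, W) array of metric depths; K: 3x3 intrinsics matrix.
    Returns an (H, W, 3) array of XYZ coordinates.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))       # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)     # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                      # per-pixel camera rays
    return rays * depth[..., None]                       # scale rays by depth

def geometry_tokens(depth, K, patch=8):
    """Patchify the point map into geometry tokens: one token per
    patch, features = flattened XYZ values (hypothetical scheme)."""
    pts = unproject_depth(depth, K)
    H, W, _ = pts.shape
    pts = pts[: H - H % patch, : W - W % patch]          # crop to patch grid
    h, w = pts.shape[0] // patch, pts.shape[1] // patch
    tokens = pts.reshape(h, patch, w, patch, 3).transpose(0, 2, 1, 3, 4)
    return tokens.reshape(h * w, patch * patch * 3)      # (N_tokens, D)

# Toy example: 16x16 constant-depth frame, focal length 10,
# principal point at pixel (8, 8).
K = np.array([[10.0, 0.0, 8.0],
              [0.0, 10.0, 8.0],
              [0.0, 0.0, 1.0]])
toks = geometry_tokens(np.full((16, 16), 2.0), K)
print(toks.shape)  # (4, 192): four 8x8 patches, 8*8*3 features each
```

In a diffusion backbone, token sequences like these could then be consumed by cross-attention context blocks as conditioning, which matches the abstract's description of lightweight context blocks injecting geometry tokens without modifying the backbone architecture.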