🤖 AI Summary
Existing camera encoding methods rely heavily on the pinhole camera assumption, limiting their generalizability to real-world cameras with complex intrinsic parameters and lens distortions.
Method: We propose Unified Camera Positional Encoding (UCPE), the first method to jointly model complete camera geometry: 6-DoF pose, intrinsics, and radial/tangential distortion via relative ray encoding for light-path characterization, plus pitch and roll via absolute orientation encoding for global orientation. UCPE introduces <1% additional trainable parameters and is integrated into a pretrained video Diffusion Transformer through a lightweight spatial attention adapter, trained on a newly constructed large-scale dataset covering diverse camera motions and lens types.
Contribution/Results: Our approach achieves state-of-the-art performance in camera-controllable video generation, significantly improving visual fidelity and geometric consistency. It demonstrates strong generalization potential across multi-view, video, and 3D tasks, establishing UCPE as a versatile, geometry-aware camera representation.
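To make the absolute orientation component concrete, the sketch below extracts pitch and roll from a camera-to-world rotation against a gravity direction. This is an illustrative assumption, not the paper's parameterization: it presumes an OpenCV-style camera frame (x right, y down, z forward) and world up along +z.

```python
import numpy as np

def pitch_roll(R_c2w, up=np.array([0.0, 0.0, 1.0])):
    """Pitch and roll of a camera relative to gravity.

    Illustrative sketch only; UCPE's exact parameterization may differ.
    R_c2w maps camera axes to world: its columns are the camera's
    right, down, and forward vectors expressed in world coordinates.
    """
    forward = R_c2w[:, 2]
    right = R_c2w[:, 0]
    # Pitch: elevation of the optical axis above the horizontal plane.
    pitch = np.arcsin(np.clip(np.dot(forward, up), -1.0, 1.0))
    # Roll: tilt of the image x-axis out of the horizontal plane.
    roll = np.arcsin(np.clip(np.dot(right, up), -1.0, 1.0))
    return pitch, roll
```

Encoding these two angles absolutely (rather than relative to the first frame) is what lets a model pin down the initial camera orientation with respect to gravity.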
📝 Abstract
Transformers have emerged as a universal backbone across 3D perception, video generation, and world models for autonomous driving and embodied AI, where understanding camera geometry is essential for grounding visual observations in three-dimensional space. However, existing camera encoding methods often rely on simplified pinhole assumptions, restricting generalization across the diverse intrinsics and lens distortions of real-world cameras. We introduce Relative Ray Encoding, a geometry-consistent representation that unifies complete camera information, including 6-DoF poses, intrinsics, and lens distortions. To evaluate its capability under diverse controllability demands, we adopt camera-controlled text-to-video generation as a testbed task. Within this setting, we further identify pitch and roll as the two components suited to Absolute Orientation Encoding, enabling full control over the initial camera orientation. Together, these designs form UCPE (Unified Camera Positional Encoding), which integrates into a pretrained video Diffusion Transformer through a lightweight spatial attention adapter, adding fewer than 1% additional trainable parameters while achieving state-of-the-art camera controllability and visual fidelity. To facilitate systematic training and evaluation, we construct a large video dataset covering a wide range of camera motions and lens types. Extensive experiments validate the effectiveness of UCPE in camera-controllable video generation and highlight its potential as a general camera representation for Transformers across future multi-view, video, and 3D tasks. Code will be available at https://github.com/chengzhag/UCPE.
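A minimal sketch of the kind of per-pixel ray representation that relative ray encoding unifies: pixels are undistorted with a Brown-Conrady model, back-projected through the intrinsics, rotated into the world frame, and packed as 6-D Plücker rays. The function name, the (k1, k2, p1, p2) distortion parameterization, and the Plücker packing are assumptions for illustration, not UCPE's actual implementation.

```python
import numpy as np

def pixel_rays(K, dist, R_c2w, t, H, W, iters=5):
    """Per-pixel world-space Plücker rays for a distorted camera.

    Illustrative sketch only, not UCPE's implementation.
    K: 3x3 intrinsics; dist = (k1, k2, p1, p2) Brown-Conrady coefficients;
    R_c2w: camera-to-world rotation; t: camera center in world coordinates.
    """
    k1, k2, p1, p2 = dist
    u, v = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
    # Normalized image coordinates of each pixel (still distorted).
    xd = (u - K[0, 2]) / K[0, 0]
    yd = (v - K[1, 2]) / K[1, 1]
    # Fixed-point iteration to invert the Brown-Conrady distortion model.
    x, y = xd.copy(), yd.copy()
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    d_cam = np.stack([x, y, np.ones_like(x)], axis=-1)
    d_cam /= np.linalg.norm(d_cam, axis=-1, keepdims=True)
    d_world = d_cam @ R_c2w.T                # rotate rays into the world frame
    o_world = np.broadcast_to(t, d_world.shape)
    m = np.cross(o_world, d_world)           # Plücker moment of each ray
    return np.concatenate([m, d_world], axis=-1)  # (H, W, 6)
```

Because each pixel carries its own ray, the same representation covers pinhole and distorted lenses alike, which is the property the abstract's "geometry-consistent" claim rests on.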