🤖 AI Summary
This study investigates whether video generation models implicitly encode 3D spatial awareness—specifically, whether their internal representations support high-accuracy camera pose estimation.
Method: Benchmarking against structure-from-motion (SfM), we systematically evaluate the pose estimation capability of intermediate features from OpenSora and propose JOG3R, a unified architecture that jointly optimizes video generation and pose estimation via task-driven lightweight fine-tuning.
Contribution/Results: We demonstrate for the first time that appropriately adapted intermediate representations of video generative models can achieve high-precision pose estimation. JOG3R significantly reduces absolute and relative pose errors (APE/RPE) without degrading generation quality—as measured by Fréchet Video Distance (FVD)—and attains accuracy comparable to dedicated SfM methods. Across multiple benchmarks, JOG3R achieves a Pareto-optimal balance between visual fidelity and geometric accuracy, establishing a new paradigm for synergistic learning of generation and 3D understanding.
📝 Abstract
Inspired by the emergent 3D capabilities of image generators, we explore whether video generators similarly exhibit 3D awareness. Using structure-from-motion (SfM) as a benchmark 3D task, we investigate whether intermediate features from OpenSora, a video generation model, can support camera pose estimation. We first examine native 3D awareness in video generation features by routing raw intermediate outputs to SfM-prediction modules such as DUSt3R. We then explore the impact of fine-tuning on camera pose estimation to enhance 3D awareness. Results indicate that while video generator features have limited inherent 3D awareness, task-specific supervision significantly boosts their accuracy for camera pose estimation, yielding competitive performance. The proposed unified model, named JOG3R, produces competitive camera pose estimates without degrading video generation quality.
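The joint training described above can be summarized as a single combined objective. Below is a minimal, purely illustrative sketch of that idea — the function names, feature shapes, and the loss weight `lam` are assumptions for illustration, not the authors' actual code or hyperparameters:

```python
# Illustrative sketch of JOG3R-style joint optimization (all names hypothetical):
# a shared backbone yields intermediate features; a lightweight pose head maps
# them to camera-pose predictions; training minimizes the video-generation loss
# plus a weighted camera-pose loss, so both tasks shape the same features.

def pose_head(features):
    """Toy stand-in for a lightweight pose head: reduces a feature
    vector to a single scalar 'pose' prediction."""
    return sum(features) / len(features)

def joint_loss(gen_loss, pose_loss, lam=0.1):
    """Combined objective: generation loss plus weighted pose loss.
    lam is an assumed balancing hyperparameter, not from the paper."""
    return gen_loss + lam * pose_loss

# Toy usage with scalar losses:
total = joint_loss(gen_loss=2.0, pose_loss=5.0, lam=0.1)
print(total)  # 2.5
```

Because the pose loss is only a weighted additive term, the generation objective remains dominant during fine-tuning, which is consistent with the reported result that pose supervision does not degrade FVD.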