🤖 AI Summary
Modeling neural radiance fields (NeRFs) in dynamic urban scenes without accurate camera pose priors remains challenging. Method: This paper proposes a dual-NeRF collaborative framework that achieves self-supervised static-dynamic decomposition and robust camera pose estimation from monocular video alone. A static NeRF jointly optimizes background geometry and the camera trajectory, while a dynamic NeRF explicitly models moving objects using 3D scene flow. The two components are trained end-to-end in a fully self-supervised manner—requiring no IMU, GPS, or other external sensors—and inherently disentangle ego-motion from independent object motion. Contribution/Results: Evaluated on standard urban-scene benchmarks, the proposed method significantly outperforms existing pose-free NeRF approaches in both camera pose accuracy and dynamic novel-view synthesis quality. It establishes a scalable, vision-only foundation for dynamic scene reconstruction, with direct implications for autonomous driving and robotic perception.
📝 Abstract
Neural Radiance Fields (NeRFs) implicitly model continuous three-dimensional scenes from a set of images with known camera poses, enabling the rendering of photorealistic novel views. However, existing NeRF-based methods face challenges in applications such as autonomous driving and robotic perception, primarily because accurate camera poses are difficult to obtain and large-scale dynamic environments are hard to handle. To address these issues, we propose Vision-only Dynamic NeRF (VDNeRF), a method that accurately recovers camera trajectories and learns spatiotemporal representations of dynamic urban scenes without requiring additional camera pose information or expensive sensor data. VDNeRF employs two separate NeRF models to jointly reconstruct the scene. The static NeRF model optimizes camera poses and the static background, while the dynamic NeRF model incorporates 3D scene flow to ensure accurate and consistent reconstruction of dynamic objects. To resolve the ambiguity between camera motion and independent object motion, we design an effective training framework that achieves robust camera pose estimation and self-supervised decomposition of static and dynamic elements in a scene. Extensive evaluations on mainstream urban driving datasets demonstrate that VDNeRF surpasses state-of-the-art NeRF-based pose-free methods in both camera pose estimation and dynamic novel view synthesis.
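The two-model design implies that each rendered ray composites contributions from both the static and the dynamic field. The abstract does not give the exact formulation, but a common way to combine two radiance fields (used by several static/dynamic NeRF decompositions) is to add their densities and mix their colors by density weight before standard volume rendering. The sketch below illustrates that compositing step for a single ray with NumPy; all function and variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def composite_render(sigma_s, rgb_s, sigma_d, rgb_d, deltas):
    """Composite volume rendering of a static and a dynamic field along one ray.

    Assumed shapes (illustrative, not from the paper):
      sigma_s, sigma_d: (N,) densities at the N ray samples from each field
      rgb_s, rgb_d:     (N, 3) colors at the samples from each field
      deltas:           (N,) distances between adjacent samples
    Returns the rendered (3,) pixel color.
    """
    # Combined density: contributions from both fields add up.
    sigma = sigma_s + sigma_d
    # Density-weighted color mixture at each sample (0.5 fallback when both are empty).
    w = np.where(sigma > 0, sigma_s / np.maximum(sigma, 1e-10), 0.5)
    rgb = w[:, None] * rgb_s + (1.0 - w)[:, None] * rgb_d
    # Standard NeRF volume rendering: opacity, transmittance, then weights.
    alpha = 1.0 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate([[1.0], (1.0 - alpha)[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)
```

Because the static field alone explains most pixels, gradients through this composite let the camera poses (optimized jointly with the static NeRF) stay consistent while the dynamic field, advected by the 3D scene flow, absorbs only the moving-object residual.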