VDNeRF: Vision-only Dynamic Neural Radiance Field for Urban Scenes

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Modeling neural radiance fields (NeRFs) in dynamic urban scenes without accurate camera pose priors remains challenging. Method: This paper proposes a dual-NeRF collaborative framework that achieves self-supervised static-dynamic decomposition and robust camera pose estimation from monocular video alone. A static NeRF jointly optimizes background geometry and camera trajectory, while a dynamic NeRF explicitly models moving objects using 3D scene flow. The two components are trained end-to-end in a fully self-supervised manner—requiring no IMU, GPS, or other external sensors—and inherently disentangle ego-motion from independent object motion. Contribution/Results: Evaluated on standard urban-scene benchmarks, our method significantly outperforms existing pose-free NeRF approaches in both camera pose accuracy and dynamic novel-view synthesis quality. It establishes a scalable, vision-only foundation for dynamic scene reconstruction, with direct implications for autonomous driving and robotic perception.

📝 Abstract
Neural Radiance Fields (NeRFs) implicitly model continuous three-dimensional scenes using a set of images with known camera poses, enabling the rendering of photorealistic novel views. However, existing NeRF-based methods encounter challenges in applications such as autonomous driving and robotic perception, primarily due to the difficulty of capturing accurate camera poses and limitations in handling large-scale dynamic environments. To address these issues, we propose Vision-only Dynamic NeRF (VDNeRF), a method that accurately recovers camera trajectories and learns spatiotemporal representations for dynamic urban scenes without requiring additional camera pose information or expensive sensor data. VDNeRF employs two separate NeRF models to jointly reconstruct the scene. The static NeRF model optimizes camera poses and static background, while the dynamic NeRF model incorporates the 3D scene flow to ensure accurate and consistent reconstruction of dynamic objects. To address the ambiguity between camera motion and independent object motion, we design an effective and powerful training framework to achieve robust camera pose estimation and self-supervised decomposition of static and dynamic elements in a scene. Extensive evaluations on mainstream urban driving datasets demonstrate that VDNeRF surpasses state-of-the-art NeRF-based pose-free methods in both camera pose estimation and dynamic novel view synthesis.
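The two-model rendering described in the abstract can be illustrated with a minimal volume-rendering sketch: densities and colors from a static and a dynamic field are blended per sample, then alpha-composited along the ray. This is a generic dual-field compositing scheme, not the authors' exact formulation; the function name and the density-weighted color blend are illustrative choices.

```python
import numpy as np

def composite_dual_nerf(sigma_s, c_s, sigma_d, c_d, dt=0.01, eps=1e-10):
    """Blend static and dynamic fields along one ray, then volume-render.

    sigma_s, sigma_d: (N,) densities from the static / dynamic field
    c_s, c_d:         (N, 3) RGB from the static / dynamic field
    Returns the rendered RGB (3,) for the ray.
    """
    sigma = sigma_s + sigma_d  # total density per sample
    # Density-weighted color blend (one common, illustrative choice)
    c = (sigma_s[:, None] * c_s + sigma_d[:, None] * c_d) / (sigma[:, None] + eps)
    alpha = 1.0 - np.exp(-sigma * dt)  # per-sample opacity
    # Transmittance: product of (1 - alpha) over all preceding samples
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = T * alpha
    return (weights[:, None] * c).sum(axis=0)
```

Because the dynamic field contributes nothing where its density is zero, the static field alone explains rigid background under this scheme, which is what makes a sparsity-style decomposition possible.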
Problem

Research questions and friction points this paper is trying to address.

Recovers camera trajectories without pose data for dynamic urban scenes
Handles large-scale dynamic environments using separate static and dynamic NeRFs
Resolves ambiguity between camera motion and independent object motion
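One standard way to make 3D scene flow self-supervising, as the bullets above suggest for the dynamic NeRF, is a cycle-consistency constraint: warping a point forward one frame and then back should return it (near) to where it started. The sketch below shows only this generic constraint, with illustrative names; the paper's actual flow losses may differ.

```python
import numpy as np

def scene_flow_consistency(xyz_t, flow_fwd, flow_bwd_at_warped):
    """Cycle-consistency error for 3D scene flow.

    xyz_t:              (N, 3) sample points at time t
    flow_fwd:           (N, 3) predicted flow t -> t+1 at xyz_t
    flow_bwd_at_warped: (N, 3) predicted flow t+1 -> t at the warped points
    Returns the mean round-trip error (scalar).
    """
    xyz_t1 = xyz_t + flow_fwd               # warp forward to t+1
    xyz_back = xyz_t1 + flow_bwd_at_warped  # warp back to t
    return np.mean(np.linalg.norm(xyz_back - xyz_t, axis=-1))
```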
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recovers camera trajectories without pose data
Employs dual NeRF models for static and dynamic reconstruction
Uses self-supervised decomposition of scene elements
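The self-supervised decomposition named above is typically driven by a photometric reconstruction loss plus a regularizer that discourages the dynamic field from absorbing static content. The sketch below shows that generic objective shape, assuming an L1 sparsity penalty on accumulated dynamic opacity; the weight and loss names are illustrative, not the authors' exact terms.

```python
import numpy as np

def decomposition_loss(rendered, target, dyn_weights, lam=0.01):
    """Photometric loss plus a sparsity penalty on the dynamic field.

    rendered, target: (H, W, 3) composited and observed images
    dyn_weights:      (H, W) accumulated dynamic-field opacity per pixel
    lam:              regularization weight (illustrative value)
    """
    photo = np.mean((rendered - target) ** 2)  # self-supervised photometric term
    sparsity = np.mean(np.abs(dyn_weights))    # keeps the dynamic field minimal
    return photo + lam * sparsity
```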
Zhengyu Zou
Northwestern Polytechnical University, Xi’an, China
Jingfeng Li
Northwestern Polytechnical University, Xi’an, China
Hao Li
Northwestern Polytechnical University, Xi’an, China
Xiaolei Hou
Northwestern Polytechnical University, Xi’an, China
Jinwen Hu
Northwestern Polytechnical University, Xi’an, China
Jingkun Chen
University of Oxford
Medical image analysis · Computer vision · Machine learning
Lechao Cheng
Associate Professor, Hefei University of Technology
Imbalanced Learning · Distillation · Noisy Label Learning · Weakly Supervised Learning · Visual Tuning
Dingwen Zhang
Northwestern Polytechnical University, Xi’an, China