🤖 AI Summary
Monocular 6-DoF robot localization in neural radiance fields (NeRF) suffers from low accuracy and poor robustness: visual place recognition (VPR) data is sparse and NeRF renders contain artifacts, both of which induce significant pose uncertainty. To address this, we propose NuRF, a guided SE(3) particle filtering framework that integrates VPR outputs directly into the particle filter recursion. An adaptive, NeRF-rendering-based nudging mechanism uses SE(3) anchor poses to guide particle sampling and reweighting, unifying coarse global localization with fine-grained local tracking. The method combines NeRF-derived geometric priors, Lie-group motion modeling, and VPR-derived place-recognition cues. Evaluated on real-world scenes, it improves convergence speed and localization accuracy, reducing average rotational and translational errors by 32.7% and 28.4%, respectively, while running in real time on monocular input.
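To make the nudging idea concrete, here is a minimal Python sketch of one guided particle-filter step, under stated assumptions: `render_at_pose` (a NeRF renderer queried at a candidate pose) and `anchors`/`anchor_sims` (SE(3) poses and image-similarity scores from a VPR front end) are hypothetical placeholders rather than the paper's actual interfaces, and the constant-pose motion model and photometric likelihood are deliberate simplifications.

```python
# Sketch of one nudged SE(3) particle-filter step (not the paper's implementation).
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector w in R^3 -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def so3_log(R):
    """Inverse of so3_exp: rotation matrix -> axis-angle vector."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def nudge_toward_anchor(pose, anchor, alpha):
    """Pull a particle a fraction alpha of the way toward a VPR anchor pose:
    linear blend for translation, axis-angle blend for rotation."""
    (R, t), (Ra, ta) = pose, anchor
    R_new = R @ so3_exp(alpha * so3_log(R.T @ Ra))
    return R_new, (1.0 - alpha) * t + alpha * ta

def nudged_pf_step(particles, weights, observed_img, anchors, anchor_sims,
                   render_at_pose, sigma_rot=0.02, sigma_trans=0.05,
                   nudge_gain=0.3, rng=None):
    """One predict -> nudge -> reweight -> resample cycle on SE(3).

    particles: list of (R, t) pose hypotheses; anchors: SE(3) poses of the
    VPR matches; anchor_sims: VPR similarity scores in [0, 1]."""
    rng = rng or np.random.default_rng()
    best = int(np.argmax(anchor_sims))       # strongest VPR match
    alpha = nudge_gain * anchor_sims[best]   # nudge harder when VPR is confident
    new_particles, new_weights = [], []
    for (R, t), w in zip(particles, weights):
        # Predict: diffuse each pose with Lie-group rotation + Gaussian translation noise.
        R = R @ so3_exp(rng.normal(scale=sigma_rot, size=3))
        t = t + rng.normal(scale=sigma_trans, size=3)
        # Nudge: bias the particle toward the best VPR anchor.
        R, t = nudge_toward_anchor((R, t), anchors[best], alpha)
        # Reweight: photometric likelihood of the NeRF render at this pose.
        err = np.mean((render_at_pose(R, t) - observed_img) ** 2)
        new_particles.append((R, t))
        new_weights.append(w * np.exp(-err))
    w = np.asarray(new_weights)
    w /= w.sum()
    idx = rng.choice(len(w), size=len(w), p=w)  # multinomial resampling
    return [new_particles[i] for i in idx], np.full(len(w), 1.0 / len(w))
```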
📝 Abstract
Can we localize a robot in radiance fields using only monocular vision? This study presents NuRF, a nudged particle filter framework for 6-DoF robot visual localization in radiance fields. NuRF sets anchors in SE(3) and leverages visual place recognition, whose image comparisons guide the sampling process; this guidance improves the convergence and robustness of particle filters for robot localization. Additionally, an adaptive scheme enhances the performance of NuRF, enabling both global visual localization and local pose tracking. Comprehensive real-world experiments demonstrate the effectiveness of NuRF, with results showing its advantages in accuracy and efficiency over alternative approaches. Furthermore, we report our findings to inform future studies of robot navigation in radiance fields.
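The abstract does not spell out the adaptive scheme, so the sketch below shows one plausible reading, not the paper's method: use the effective sample size (ESS) of the particle weights to switch between a strong, VPR-anchored nudge for global localization and a weak nudge for local pose tracking. The gains, threshold, and re-seeding fraction (`gain_global`, `gain_local`, `ess_ratio_thresh`) are illustrative assumptions.

```python
# Hypothetical ESS-based adaptation between global localization and local tracking.
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights: equals N for uniform
    weights, and 1 when all mass sits on a single particle."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def adapt_nudge(weights, gain_global=0.5, gain_local=0.05, ess_ratio_thresh=0.5):
    """Pick (nudge_gain, reseed_fraction) for the next filter step.

    Low ESS means the weights are degenerate, i.e. most pose hypotheses fit
    the observation poorly, so treat the filter as globally lost: nudge
    strongly toward the VPR anchors and re-seed a share of particles at them.
    High ESS means the cloud has converged, so weaken the nudge and track."""
    ess_ratio = effective_sample_size(weights) / len(weights)
    if ess_ratio < ess_ratio_thresh:
        return gain_global, 0.2   # global localization mode
    return gain_local, 0.0        # local pose-tracking mode
```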