Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR

📅 2024-10-04
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of detecting occluded obstacles and ensuring robust collision avoidance in autonomous navigation under low-visibility conditions, this paper proposes the first closed-loop autonomous navigation framework based on laboratory-grade non-line-of-sight (NLOS) imaging. Methodologically, we employ a single-photon avalanche diode (SPAD) LiDAR to acquire multi-bounce photon time-of-flight histograms, develop a dynamics-integrated transient rendering simulator, and design a lightweight convolutional network for real-time hidden-region occupancy mapping and path re-planning. Key contributions include: (1) the first demonstration of NLOS-driven “corner avoidance” by a mobile robot in a realistic L-shaped corridor; (2) a high-fidelity transient simulation framework enabling reproducible validation of NLOS perception systems; and (3) experimental results showing that NLOS perception significantly enhances path planning safety—reducing collision risk by 83% compared to conventional line-of-sight (LOS) approaches.

📝 Abstract
Robust autonomous navigation in environments with limited visibility remains a critical challenge in robotics. We present a novel approach that leverages Non-Line-of-Sight (NLOS) sensing using single-photon LiDAR to improve visibility and enhance autonomous navigation. Our method enables mobile robots to "see around corners" by utilizing multi-bounce light information, effectively expanding their perceptual range without additional infrastructure. We propose a three-module pipeline: (1) Sensing, which captures multi-bounce histograms using SPAD-based LiDAR; (2) Perception, which estimates occupancy maps of hidden regions from these histograms using a convolutional neural network; and (3) Control, which allows a robot to follow safe paths based on the estimated occupancy. We evaluate our approach through simulations and real-world experiments on a mobile robot navigating an L-shaped corridor with hidden obstacles. Our work represents the first experimental demonstration of NLOS imaging for autonomous navigation, paving the way for safer and more efficient robotic systems operating in complex environments. We also contribute a novel dynamics-integrated transient rendering framework for simulating NLOS scenarios, facilitating future research in this domain.
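The three-module pipeline described in the abstract (Sensing → Perception → Control) can be sketched as a minimal closed loop. This is a toy illustration, not the authors' code: the function names, the histogram values, and the threshold-based stand-in for the CNN are all assumptions made here for clarity.

```python
# Hypothetical sketch of the three-module loop: Sensing -> Perception -> Control.
# Data shapes and logic are illustrative stand-ins, not the paper's actual system.

def sense():
    """Stand-in for a SPAD LiDAR capture: return a multi-bounce
    photon time-of-flight histogram as counts per time bin."""
    return [0, 0, 3, 12, 40, 22, 5, 1]  # toy histogram

def perceive(histogram, threshold=10):
    """Stand-in for the paper's CNN: mark a hidden-region cell occupied
    when the photon count in the matching time bin exceeds a threshold."""
    return [count > threshold for count in histogram]

def control(occupancy):
    """Stand-in for the planner: advance until the first occupied cell
    and return the number of cells that are safe to traverse."""
    for cell, occupied in enumerate(occupancy):
        if occupied:
            return cell
    return len(occupancy)

histogram = sense()
occupancy = perceive(histogram)
safe_cells = control(occupancy)
print(safe_cells)  # -> 3: the robot stops before the first occupied cell
```

In the real system, `perceive` is a trained convolutional network and `control` a path re-planner, but the data flow between the three modules has this shape.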
Problem

Research questions and friction points this paper is trying to address.

Improving visibility for autonomous navigation in low-visibility environments.
Enabling robots to detect hidden objects using single-photon LiDAR.
Developing a three-module pipeline for sensing, perception, and control.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-photon LiDAR for NLOS sensing
Convolutional neural network for occupancy estimation
Dynamics-integrated transient rendering framework
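As a rough intuition for the occupancy-estimation step listed above, the sketch below replaces the paper's learned convolutional network with a single hand-set 1D convolution over a transient histogram, followed by a threshold that yields a binary occupancy map. The kernel, threshold, and the one-to-one mapping from time bins to hidden-region cells are all assumptions for illustration.

```python
import numpy as np

def occupancy_from_histogram(histogram, kernel=(0.25, 0.5, 0.25), threshold=8.0):
    """Toy stand-in for a learned CNN: smooth the photon-count histogram
    with a fixed 1D kernel, then threshold into a binary occupancy map."""
    counts = np.asarray(histogram, dtype=float)
    smoothed = np.convolve(counts, kernel, mode="same")  # one convolution "layer"
    return smoothed > threshold                          # binarizing "activation"

hist = [0, 0, 3, 12, 40, 22, 5, 1]   # toy multi-bounce histogram
occ = occupancy_from_histogram(hist)
print(occ.astype(int))  # -> [0 0 0 1 1 1 1 0]
```

A trained network would learn many such kernels from simulated and real transients; the fixed kernel here only conveys the input/output structure (histogram in, occupancy map out).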