Shoot-Bounce-3D: Single-Shot Occlusion-Aware 3D from Lidar by Decomposing Two-Bounce Light

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of reconstructing 3D scenes containing occluded regions and specular objects (e.g., mirrors) from a single single-photon lidar measurement. Conventional pipelines estimate depth from direct returns and discard multi-path reflections, yet second-bounce light carries information about dense depth, hidden geometry, and material properties. The authors target the practical but difficult setting where multiple scene points are illuminated simultaneously, and propose a data-driven framework for inverting the resulting light transport: a learned prior decomposes the measured two-bounce signal into the contributions from each laser spot. To enable this, they build the first large-scale simulated dataset of roughly 100k lidar transients for indoor scenes. Experiments show that the decomposed light recovers occluded structures and geometry behind mirrors from a single measurement. The code and dataset are publicly released.

📝 Abstract
3D scene reconstruction from a single measurement is challenging, especially in the presence of occluded regions and specular materials, such as mirrors. We address these challenges by leveraging single-photon lidars. These lidars estimate depth from light that is emitted into the scene and reflected directly back to the sensor. However, they can also measure light that bounces multiple times in the scene before reaching the sensor. This multi-bounce light contains additional information that can be used to recover dense depth, occluded geometry, and material properties. Prior work with single-photon lidar, however, has only demonstrated these use cases when a laser sequentially illuminates one scene point at a time. We instead focus on the more practical - and challenging - scenario of illuminating multiple scene points simultaneously. The complexity of light transport due to the combined effects of multiplexed illumination, two-bounce light, shadows, and specular reflections is challenging to invert analytically. Instead, we propose a data-driven method to invert light transport in single-photon lidar. To enable this approach, we create the first large-scale simulated dataset of ~100k lidar transients for indoor scenes. We use this dataset to learn a prior on complex light transport, enabling measured two-bounce light to be decomposed into the constituent contributions from each laser spot. Finally, we experimentally demonstrate how this decomposed light can be used to infer 3D geometry in scenes with occlusions and mirrors from a single measurement. Our code and dataset are released at https://shoot-bounce-3d.github.io.
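The abstract's core difficulty can be made concrete: when several laser spots are lit at once, the single-photon lidar records the sum of each spot's transient, and the learned decomposition must invert that superposition. The following is a minimal illustrative sketch of this forward model (our own simplification, not the paper's code; bin width, spot count, and path lengths are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_spots = 256, 4      # time bins; simultaneously lit laser spots
c = 3e8                       # speed of light (m/s)
bin_width = 100e-12           # assumed 100 ps time bins

def transient_for_spot(path_len_m, amplitude):
    """Idealized transient: one return at the bin matching the path length."""
    h = np.zeros(n_bins)
    t = path_len_m / c                       # time of flight (s)
    h[int(t / bin_width) % n_bins] = amplitude
    return h

# Hypothetical two-bounce path lengths (laser -> spot -> scene -> sensor)
path_lengths = rng.uniform(2.0, 6.0, n_spots)   # meters
amplitudes = rng.uniform(0.5, 1.0, n_spots)

components = np.stack([transient_for_spot(d, a)
                       for d, a in zip(path_lengths, amplitudes)])
measurement = components.sum(axis=0)  # what the sensor actually records

print(measurement.shape)  # → (256,)
```

The decomposition task the paper learns from data is exactly the inverse step: recovering `components` given only `measurement`, which has no closed-form solution once shadows and specular reflections enter the transport.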
Problem

Research questions and friction points this paper is trying to address.

Reconstructing 3D scenes containing occlusions and mirrors from a single lidar measurement
Recovering dense depth and hidden geometry by decomposing two-bounce light
Inverting the complex light transport produced by multiplexed illumination with data-driven learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using single-photon lidar to capture multi-bounce light for 3D reconstruction
Creating a large-scale simulated dataset to learn light transport priors
Decomposing two-bounce light to infer occluded geometry and specular surfaces
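Once the two-bounce return for a single laser spot is isolated, classical geometry applies: the measured path length laser → spot → scene point → sensor constrains the scene point to an ellipsoid with foci at the spot and the sensor. A hedged sketch of that constraint (all positions here are illustrative assumptions, not values from the paper):

```python
import numpy as np

laser = np.array([0.0, 0.0, 0.0])
sensor = np.array([0.1, 0.0, 0.0])
spot = np.array([0.0, 0.0, 3.0])       # illuminated point on a visible wall
scene_pt = np.array([1.0, 0.5, 2.0])   # occluded point we want to recover

def two_bounce_path(laser, spot, p, sensor):
    """Total path length of the two-bounce route laser->spot->p->sensor."""
    return (np.linalg.norm(spot - laser)
            + np.linalg.norm(p - spot)
            + np.linalg.norm(sensor - p))

L = two_bounce_path(laser, spot, scene_pt, sensor)

# Any candidate point q consistent with this return satisfies
# |q - spot| + |q - sensor| = L - |spot - laser|, i.e. q lies on an
# ellipsoid with foci at `spot` and `sensor`.
ellipse_sum = L - np.linalg.norm(spot - laser)
residual = (np.linalg.norm(scene_pt - spot)
            + np.linalg.norm(sensor - scene_pt) - ellipse_sum)
print(abs(residual) < 1e-9)  # → True
```

Intersecting such ellipsoids across many decomposed laser spots is what pins down occluded geometry; the decomposition step is what makes the per-spot path lengths available in the first place.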