Event-based Motion-Robust Accurate Shape Estimation for Mixed Reflectance Scenes

📅 2023-11-16
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing structured-light systems—whether event-based or frame-based—struggle to simultaneously handle diffuse and specular surfaces, failing to achieve high-speed, high-accuracy 3D reconstruction in scenes with mixed reflectance. This paper introduces the first event-driven structured-light 3D imaging system specifically designed for mixed-reflectance scenarios. We propose a novel hierarchical reflection decomposition framework that unifies triangulation and deflectometry, treating the entire scene as a virtual screen to jointly model and decouple diffuse, single-bounce, and multi-bounce specular reflections. Leveraging synchronized sensing between an event camera and a scanning laser—guided by epipolar constraints and reflection-component separation—the system achieves 14 Hz reconstruction rate with sub-500 μm accuracy in mixed-reflectance scenes, and supports ultra-fast 250 Hz operation on purely diffuse surfaces. To our knowledge, this is the first method enabling motion-robust, reflectance-agnostic, real-time, high-precision 3D shape estimation.
📝 Abstract
Event-based structured light systems have recently been introduced as an exciting alternative to conventional frame-based triangulation systems for the 3D measurement of diffuse surfaces. Important benefits include the fast capture speed and the high dynamic range provided by the event camera - albeit at the cost of lower data quality. So far, both low-accuracy event-based as well as high-accuracy frame-based 3D imaging systems are tailored to a specific surface type, such as diffuse or specular, and cannot be used for a broader class of object surfaces ("mixed reflectance scenes"). In this paper, we present a novel event-based structured light system that enables fast 3D imaging of mixed reflectance scenes with high accuracy. On the captured events, we use epipolar constraints that intrinsically enable decomposing the measured reflections into diffuse, two-bounce specular, and other multi-bounce reflections. The diffuse objects in the scene are reconstructed using triangulation. Eventually, the reconstructed diffuse scene parts are used as a "display" to evaluate the specular scene parts via deflectometry. This novel procedure allows us to use the entire scene as a virtual screen, using only a scanning laser and an event camera. The resulting system achieves fast and motion-robust (14 Hz) reconstructions of mixed reflectance scenes with < 500 μm accuracy. Moreover, we introduce a "superfast" capture mode (250 Hz) for the 3D measurement of diffuse scenes.
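The paper itself includes no code; the following is a minimal sketch of the epipolar-consistency idea described in the abstract: an event that lies on the epipolar line induced by the currently scanned laser point is consistent with a direct (diffuse) reflection, while an event far from that line must come from a specular bounce. The fundamental matrix `F`, the point names, and the pixel tolerance are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def epipolar_distance(F, p_laser, p_event):
    """Distance (in pixels) of a camera event to the epipolar line
    induced by the laser-scan point. Both points are homogeneous
    3-vectors; F is the laser->camera fundamental matrix."""
    l = F @ p_laser                       # epipolar line in the camera image
    return abs(l @ p_event) / np.hypot(l[0], l[1])

def classify_events(F, p_laser, events, tol=1.5):
    """Toy classifier: label an event 'diffuse' if it sits on the
    expected epipolar line, else 'specular' (single- or multi-bounce).
    The real system uses timing and further cues beyond this sketch."""
    return ['diffuse' if epipolar_distance(F, p_laser, e) < tol
            else 'specular' for e in events]
```

For a rectified laser–camera pair (horizontal epipolar lines), an event on the same row as the scanned point is labelled diffuse, while one displaced vertically is flagged as specular.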
Problem

Research questions and friction points this paper is trying to address.

Accurate 3D imaging for mixed reflectance scenes
Motion-robust shape estimation using event-based systems
Decomposing reflections into diffuse and specular components
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event-based structured light for mixed reflectance
Epipolar constraints decompose reflection types
Diffuse parts as virtual screen for deflectometry
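The last point, using reconstructed diffuse geometry as a "virtual screen" for deflectometry, reduces at each surface point to the law of reflection: knowing the camera ray and the known 3D diffuse point whose reflection was observed, the specular surface normal is the halfway vector between the two directions. A minimal sketch of that geometric step, with all names and conventions assumed for illustration:

```python
import numpy as np

def specular_normal(view_dir, screen_point, surface_point):
    """Recover the normal of a specular surface point from the law of
    reflection: view_dir is the camera->surface ray direction,
    screen_point is the known diffuse 3D point (the 'virtual screen')
    seen in reflection, surface_point is the hypothesised point on the
    specular surface. Returns the unit surface normal."""
    d_in = view_dir / np.linalg.norm(view_dir)     # incoming ray (toward surface)
    d_out = screen_point - surface_point
    d_out = d_out / np.linalg.norm(d_out)          # reflected ray (toward screen)
    n = d_out - d_in                               # halfway vector is parallel to n
    return n / np.linalg.norm(n)
```

In the full method, the surface point itself is not known a priori; deflectometry resolves this normal/depth ambiguity by regularization or by observing multiple screen points, which this sketch omits.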