Recovering Parametric Scenes from Very Few Time-of-Flight Pixels

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the depth-sensing bottleneck of low-cost, single-pixel time-of-flight (ToF) sensors, using a distributed set of as few as ~15 pixels. We propose an analysis-by-synthesis framework for robust reconstruction of parametric 3D scene geometry and pose. Our method combines a feed-forward neural network for initial pose estimation with end-to-end refinement driven by differentiable rendering, leveraging time-resolved photon-counting measurements to invert depth from sparse, wide field-of-view observations. Evaluated on both synthetic and real-world textureless objects, the approach accurately estimates the full 6D pose of known parametric models, generalizes across diverse parametric scene classes, and systematically characterizes the fundamental imaging limits of this sparse sensing paradigm. The core contribution is a first demonstration of precise parametric 3D understanding from ultra-sparse ToF data, effectively bypassing hardware resolution constraints through algorithmic innovation.

📝 Abstract
We aim to recover the geometry of 3D parametric scenes using very few depth measurements from low-cost, commercially available time-of-flight sensors. These sensors offer very low spatial resolution (i.e., a single pixel), but image a wide field-of-view per pixel and capture detailed time-of-flight data in the form of time-resolved photon counts. This time-of-flight data encodes rich scene information and thus enables recovery of simple scenes from sparse measurements. We investigate the feasibility of using a distributed set of few measurements (e.g., as few as 15 pixels) to recover the geometry of simple parametric scenes with a strong prior, such as estimating the 6D pose of a known object. To achieve this, we design a method that utilizes both feed-forward prediction to infer scene parameters, and differentiable rendering within an analysis-by-synthesis framework to refine the scene parameter estimate. We develop hardware prototypes and demonstrate that our method effectively recovers object pose given an untextured 3D model in both simulations and controlled real-world captures, and show promising initial results for other parametric scenes. We additionally conduct experiments to explore the limits and capabilities of our imaging solution.
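The analysis-by-synthesis loop described in the abstract (feed-forward initialization followed by render-and-compare refinement) can be sketched in miniature. The toy below is not the authors' implementation: it recovers a single scene parameter, the depth of a fronto-parallel plane, by gradient descent on the mismatch between a rendered and an "observed" transient histogram, with finite differences standing in for a true differentiable renderer. All constants (bin resolution, pulse width, learning rate) are illustrative assumptions.

```python
import numpy as np

# Toy analysis-by-synthesis sketch (illustrative assumptions throughout).
N_BINS = 128
BIN_RES = 0.015    # metres of depth per time bin (~100 ps bins), assumed
SIGMA = 0.03       # effective pulse width in metres, assumed
dists = (np.arange(N_BINS) + 0.5) * BIN_RES  # depth at each bin centre

def render_histogram(depth):
    """Forward model: a Gaussian pulse centred at the plane's depth."""
    return np.exp(-0.5 * ((dists - depth) / SIGMA) ** 2)

def refine_depth(observed, depth0, lr=2e-4, steps=300, eps=1e-4):
    """Refine a coarse (feed-forward) depth estimate by matching the
    rendered histogram to the observed one under an L2 loss.
    Finite differences stand in for a differentiable renderer."""
    d = depth0
    for _ in range(steps):
        loss_hi = np.sum((render_histogram(d + eps) - observed) ** 2)
        loss_lo = np.sum((render_histogram(d - eps) - observed) ** 2)
        d -= lr * (loss_hi - loss_lo) / (2 * eps)  # gradient step
    return d

observed = render_histogram(1.5)        # "captured" histogram, true depth 1.5 m
refined = refine_depth(observed, 1.45)  # 1.45 m plays the network's coarse guess
```

In the paper the same render-and-compare principle operates on a full 6D pose with an actual differentiable renderer over many pixels; the feed-forward network supplies the initialization that this refinement stage polishes.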
Problem

Research questions and friction points this paper is trying to address.

Recovering 3D geometry from sparse depth measurements
Estimating 6D object pose using few ToF pixels
Developing differentiable rendering for parametric scene reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Time-resolved photon-count data from ToF sensors
Feed-forward prediction and differentiable rendering
Analysis-by-synthesis framework refinement
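The photon-count measurement listed above can be illustrated with a hedged toy simulation (all numbers are assumptions, not the paper's hardware): a pulsed-laser return plus ambient light is Poisson-sampled per time bin, and even a crude smoothed-argmax estimator recovers depth from a single pixel's histogram, showing that timing rather than spatial resolution carries the geometry.

```python
import numpy as np

# Illustrative single-pixel transient simulation (assumed constants).
rng = np.random.default_rng(0)
C = 3e8                              # speed of light (m/s)
BIN_W = 1e-10                        # 100 ps time bins, assumed
t = np.arange(256) * BIN_W
depth = 2.1                          # true depth in metres
t0 = 2 * depth / C                   # round-trip time of the pulse
flux = 50 * np.exp(-0.5 * ((t - t0) / 3e-10) ** 2) + 0.5  # signal + ambient
counts = rng.poisson(flux)           # photon-counting measurement per bin

# Crude depth estimate: smooth the histogram, take the peak bin.
smoothed = np.convolve(counts, np.ones(5), mode="same")
est = t[np.argmax(smoothed)] * C / 2
```

A real sensor adds noise sources beyond Poisson counting and a wide per-pixel field of view mixing many depths; the point of the sketch is only that the time axis encodes depth even at a spatial resolution of one pixel.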