SPC to 3D: Novel View Synthesis from Binary SPC via I2I translation

📅 2025-06-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional 3D reconstruction fails on binary images from single-photon cameras (SPCs) because their one-bit measurements discard most texture and color information. To address this, we propose a modular two-stage framework: first, Pix2PixHD translates SPC binary images into high-fidelity RGB reconstructions; second, the resulting RGB images drive geometrically consistent novel-view synthesis via NeRF or 3D Gaussian Splatting (3DGS). Our approach is the first to explicitly separate image restoration from radiance field modeling, preserving geometric accuracy while substantially improving radiometric consistency. Quantitative and qualitative evaluations demonstrate that our method significantly outperforms baselines that consume binary inputs directly, achieving superior perceptual quality and structural fidelity in novel-view synthesis. This work overcomes a critical bottleneck in 3D radiance field reconstruction from binary imaging.

📝 Abstract
Single Photon Avalanche Diodes (SPADs) represent a cutting-edge imaging technology, capable of detecting individual photons with remarkable timing precision. Building on this sensitivity, Single Photon Cameras (SPCs) enable image capture at exceptionally high speeds under both low and high illumination. Enabling 3D reconstruction and radiance field recovery from such SPC data holds significant promise. However, the binary nature of SPC images leads to severe information loss, particularly in texture and color, making traditional 3D synthesis techniques ineffective. To address this challenge, we propose a modular two-stage framework that converts binary SPC images into high-quality colorized novel views. The first stage performs image-to-image (I2I) translation using generative models such as Pix2PixHD, converting binary SPC inputs into plausible RGB representations. The second stage employs 3D scene reconstruction techniques such as Neural Radiance Fields (NeRF) or 3D Gaussian Splatting (3DGS) to generate novel views. We validate our two-stage pipeline (Pix2PixHD + NeRF/3DGS) through extensive qualitative and quantitative experiments, demonstrating significant improvements in perceptual quality and geometric consistency over baselines that operate on binary inputs directly.
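To illustrate why single binary SPC frames lose texture and color, a minimal NumPy simulation can be sketched using the standard Bernoulli photon-detection model (a pixel fires if at least one photon arrives, so P(1) = 1 − exp(−flux)); this is an illustrative model, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def spc_binary_frame(flux, rng):
    """Simulate one binary SPC frame: a pixel reads 1 if at least
    one photon is detected, with probability 1 - exp(-flux)."""
    p_detect = 1.0 - np.exp(-flux)
    return (rng.random(flux.shape) < p_detect).astype(np.uint8)

# Smooth intensity ramp: per-pixel mean photon count from 0.05 to 3.0.
flux = np.tile(np.linspace(0.05, 3.0, 8), (4, 1))

# A single frame is pure 0/1: the smooth ramp (texture) is destroyed.
single = spc_binary_frame(flux, rng)

# Averaging many frames estimates the detection probability, from
# which the flux can be inverted: flux ~ -log(1 - mean).
stack = np.mean([spc_binary_frame(flux, rng) for _ in range(500)], axis=0)
recovered = -np.log(1.0 - np.clip(stack, 0.0, 0.999))
```

The single frame shows the information loss the abstract describes; the multi-frame inversion shows why intensity is in principle recoverable, which is what the learned I2I stage exploits from far fewer observations.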
Problem

Research questions and friction points this paper is trying to address.

Convert binary SPC images to colorized 3D views
Address information loss in texture and color
Improve 3D reconstruction from single-photon camera data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Image-to-image translation for binary SPC data
Generative models like Pix2PixHD for RGB conversion
NeRF or 3DGS for 3D scene reconstruction
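The decoupled design above can be sketched as a two-stage data flow. The function names (`translate_to_rgb`, `fit_radiance_field`) and the dummy stand-ins are illustrative placeholders, not the paper's code; in practice stage 1 would be a trained Pix2PixHD generator and stage 2 a NeRF or 3DGS optimizer:

```python
from typing import Callable, List
import numpy as np

def run_pipeline(
    binary_frames: List[np.ndarray],
    translate_to_rgb: Callable[[np.ndarray], np.ndarray],
    fit_radiance_field: Callable[[List[np.ndarray]], Callable],
) -> Callable:
    """Stage 1: restore RGB views from binary SPC frames.
    Stage 2: fit a radiance field on the restored views and
    return a novel-view renderer."""
    rgb_views = [translate_to_rgb(f) for f in binary_frames]
    return fit_radiance_field(rgb_views)

# Dummy stand-ins that only demonstrate the data flow.
dummy_translate = lambda b: np.repeat(b[..., None] * 255.0, 3, axis=-1)
dummy_fit = lambda views: (lambda pose: views[0])  # ignores the pose

frames = [np.random.default_rng(i).integers(0, 2, (4, 4)) for i in range(3)]
render = run_pipeline(frames, dummy_translate, dummy_fit)
img = render(pose=None)  # an HxWx3 "novel view"
```

Keeping the two stages behind plain callables is what makes the framework modular: either stage can be swapped (e.g. NeRF for 3DGS) without touching the other.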