Robust Single-shot Structured Light 3D Imaging via Neural Feature Decoding

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor robustness of single-shot structured light 3D imaging under occlusion, non-Lambertian surfaces, and fine geometric structures, this paper proposes an end-to-end decoding framework based on neural feature-space matching, abandoning noise-sensitive pixel-domain correspondence. Key contributions: (1) the first integration of neural feature matching into structured light decoding, enabling construction of a geometry-aware cost volume in latent feature space; (2) a depth refinement module driven by priors from large-scale monocular depth estimation models; and (3) a lightweight neural encoder trained exclusively on nearly one million physically rendered synthetic samples. The method generalizes to real indoor scenes, supports diverse structured light patterns without retraining, and consistently outperforms state-of-the-art approaches—including Apple’s Face ID, Intel RealSense, and RGB stereo matching—in both accuracy and robustness.

📝 Abstract
We consider the problem of active 3D imaging using single-shot structured light systems, which are widely employed in commercial 3D sensing devices such as Apple Face ID and Intel RealSense. Traditional structured light methods typically decode depth correspondences through pixel-domain matching algorithms, resulting in limited robustness under challenging scenarios like occlusions, fine-structured details, and non-Lambertian surfaces. Inspired by recent advances in neural feature matching, we propose a learning-based structured light decoding framework that performs robust correspondence matching within feature space rather than the fragile pixel domain. Our method extracts neural features from the projected patterns and captured infrared (IR) images, explicitly incorporating their geometric priors by building cost volumes in feature space, achieving substantial performance improvements over pixel-domain decoding approaches. To further enhance depth quality, we introduce a depth refinement module that leverages strong priors from large-scale monocular depth estimation models, improving fine detail recovery and global structural coherence. To facilitate effective learning, we develop a physically-based structured light rendering pipeline, generating nearly one million synthetic pattern-image pairs with diverse objects and materials for indoor settings. Experiments demonstrate that our method, trained exclusively on synthetic data with multiple structured light patterns, generalizes well to real-world indoor environments, effectively processes various pattern types without retraining, and consistently outperforms both commercial structured light systems and passive stereo RGB-based depth estimation methods. Project page: https://namisntimpot.github.io/NSLweb/.
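The core idea of matching in feature space rather than the pixel domain can be sketched as follows. This is a minimal numpy illustration under assumptions, not the paper's actual network: the feature maps are taken as given, and the helper names `feature_cost_volume` and `wta_disparity` are hypothetical. A learned decoder would replace the winner-take-all step, but the cost-volume construction shown here is the standard correlation form the abstract refers to.

```python
import numpy as np

def feature_cost_volume(feat_ir, feat_pat, max_disp):
    """Correlation cost volume in feature space (illustrative sketch).

    feat_ir, feat_pat: (C, H, W) neural features of the captured IR
    image and the projected pattern; max_disp: number of disparity
    candidates. Returns a (max_disp, H, W) volume of matching scores.
    """
    C, H, W = feat_ir.shape
    cost = np.zeros((max_disp, H, W), dtype=np.float32)
    for d in range(max_disp):
        if d == 0:
            cost[0] = (feat_ir * feat_pat).sum(axis=0) / C
        else:
            # Shift pattern features by d pixels along the epipolar (x)
            # axis and correlate; the unmatched border keeps score 0.
            cost[d, :, d:] = (feat_ir[:, :, d:] * feat_pat[:, :, :-d]).sum(axis=0) / C
    return cost

def wta_disparity(cost):
    """Winner-take-all disparity: argmax over the candidate axis."""
    return cost.argmax(axis=0)
```

Because the correlation is computed on learned features rather than raw intensities, the same machinery tolerates appearance changes (specularities, defocus) that break direct pixel matching.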
Problem

Research questions and friction points this paper is trying to address.

Improves robustness of single-shot structured light 3D imaging under occlusion, non-Lambertian surfaces, and fine structures.
Replaces fragile pixel-domain matching with decoding in a learned feature space.
Improves depth quality via monocular depth priors and large-scale synthetic training data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural feature matching replaces pixel-domain decoding
Feature space cost volumes incorporate geometric priors
A refinement module exploits monocular depth priors to recover fine detail and global structure
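One simple way to fuse a monocular prior with structured-light depth, shown here as a hedged sketch rather than the paper's learned refinement module, is to align the scale/shift-ambiguous monocular prediction to the metric structured-light depth by least squares and use the aligned prior to fill invalid pixels. The function name `refine_with_mono_prior` is hypothetical.

```python
import numpy as np

def refine_with_mono_prior(depth_sl, valid, mono_rel):
    """Fill/refine structured-light depth with a monocular prior.

    depth_sl: (H, W) metric depth from structured-light decoding;
    valid: (H, W) bool mask of trusted pixels; mono_rel: (H, W)
    relative (scale/shift-ambiguous) monocular depth prediction.
    Fits a global least-squares scale s and shift t mapping the prior
    onto the metric depth, then fills invalid pixels from the prior.
    """
    x = mono_rel[valid].ravel()
    y = depth_sl[valid].ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    aligned = s * mono_rel + t
    return np.where(valid, depth_sl, aligned), (s, t)
```

The paper's learned module would additionally sharpen fine detail in valid regions; this sketch only captures the hole-filling and global-coherence aspect.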
Jiaheng Li
Wangxuan Institute of Computer Technology, Peking University, China
Qiyu Dai
Peking University
Computer Vision, Computer Graphics, Deep Learning
Lihan Li
Yuanpei College, Peking University, China
Praneeth Chakravarthula
University of North Carolina at Chapel Hill, United States
He Sun
College of Future Technology, Peking University, China
Baoquan Chen
Peking University, IEEE Fellow
computer graphics, computer vision, visualization, multimedia, human-computer interaction
Wenzheng Chen
Peking University
Computational Photography, 3D Vision