Neural Visibility of Point Sets

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses visibility determination for 3D point clouds from a given viewpoint. It proposes the first end-to-end deep-learning-based binary classification method for the task, overcoming key limitations of traditional geometric approaches such as Hidden Point Removal (HPR): low computational efficiency, sensitivity to noise, poor handling of concave regions, and degradation on low-density point clouds. The method employs a 3D U-Net to extract view-independent, point-wise features, fuses them with a viewpoint-direction encoding, and applies a shared MLP to predict per-point visibility; ground-truth visibility labels are generated by rendering the underlying 3D models. Extensive evaluation on ShapeNet, the ABC Dataset, and real-world scans demonstrates substantial improvements over HPR, including up to 126× faster inference, along with strong generalization and robustness to noise. The visibility predictions also significantly enhance downstream tasks such as point cloud visualization, surface reconstruction, and normal estimation.

📝 Abstract
Point clouds are widely used representations of 3D data, but determining the visibility of points from a given viewpoint remains a challenging problem due to their sparse nature and lack of explicit connectivity. Traditional methods, such as Hidden Point Removal (HPR), face limitations in computational efficiency, robustness to noise, and handling concave regions or low-density point clouds. In this paper, we propose a novel approach to visibility determination in point clouds by formulating it as a binary classification task. The core of our network consists of a 3D U-Net that extracts view-independent point-wise features and a shared multi-layer perceptron (MLP) that predicts point visibility using the extracted features and view direction as inputs. The network is trained end-to-end with ground-truth visibility labels generated from rendered 3D models. Our method significantly outperforms HPR in both accuracy and computational efficiency, achieving up to 126 times speedup on large point clouds. Additionally, our network demonstrates robustness to noise and varying point cloud densities and generalizes well to unseen shapes. We validate the effectiveness of our approach through extensive experiments on the ShapeNet, ABC Dataset and real-world datasets, showing substantial improvements in visibility accuracy. We also demonstrate the versatility of our method in various applications, including point cloud visualization, surface reconstruction, normal estimation, shadow rendering, and viewpoint optimization. Our code and models are available at https://github.com/octree-nn/neural-visibility.
Problem

Research questions and friction points this paper is trying to address.

Determining point visibility in sparse 3D point clouds
Overcoming limitations of traditional Hidden Point Removal methods
Handling noise and low-density regions in visibility classification
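For context on the baseline these friction points refer to, here is a minimal sketch of classical Hidden Point Removal (the spherical-flipping-plus-convex-hull operator of Katz et al.), which the paper uses as its point of comparison. This is an illustrative reimplementation, not the authors' code; the `gamma` radius parameter is an assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible(points, viewpoint, gamma=1.0):
    """Boolean mask of points visible from `viewpoint` via Hidden Point Removal.

    Each point is reflected about a large sphere centred at the viewpoint
    (spherical flipping); points landing on the convex hull of the flipped
    set, together with the viewpoint itself, are classified as visible.
    """
    p = points - viewpoint                        # viewpoint moved to the origin
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = norms.max() * (10.0 ** gamma)             # flipping-sphere radius (heuristic)
    flipped = p + 2.0 * (R - norms) * p / norms   # |flipped| = 2R - |p|
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))
    mask = np.zeros(len(points), dtype=bool)
    mask[hull.vertices[hull.vertices < len(points)]] = True
    return mask
```

Note that this operator runs a full convex-hull computation per viewpoint and has no notion of local density or noise, which is exactly where the learned classifier claims its advantage.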
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses 3D U-Net for view-independent feature extraction
Combines features with view direction via shared MLP
Treats visibility as end-to-end binary classification task
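The fusion-and-classification head described by these points can be sketched as follows. This is a shape-level illustration only: the 3D U-Net backbone is stubbed with random per-point features, and the layer widths are assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

N, F = 1024, 32                       # number of points; assumed U-Net feature width
feats = rng.normal(size=(N, F))       # stand-in for view-independent U-Net features
view_dir = np.array([0.0, 0.0, 1.0])  # unit view direction, shared by all points

def shared_mlp(x, dims=(64, 1)):
    """Per-point MLP: identical weights applied to every point (a 1x1 convolution)."""
    for i, d in enumerate(dims):
        W = rng.normal(scale=0.1, size=(x.shape[1], d))
        x = x @ W                      # linear layer (bias omitted for brevity)
        if i < len(dims) - 1:
            x = np.maximum(x, 0.0)     # ReLU on hidden layers
    return x

# Fuse view-independent features with the view-direction encoding, then classify.
fused = np.concatenate([feats, np.tile(view_dir, (N, 1))], axis=1)  # (N, F + 3)
logits = shared_mlp(fused)                                          # (N, 1)
visible = 1.0 / (1.0 + np.exp(-logits)) > 0.5                       # per-point visibility
```

The key design choice mirrored here is that the expensive feature extraction is view-independent and can be computed once, while only the cheap shared MLP needs re-running per viewpoint.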
Jun-Hao Wang
Wangxuan Institute of Computer Technology, Peking University, China
Yi-Yang Tian
Wangxuan Institute of Computer Technology, Peking University, China
Baoquan Chen
Peking University, IEEE Fellow
computer graphics · computer vision · visualization · multimedia · human computer interaction
Peng-Shuai Wang
Assistant Professor, Peking University
Geometry processing · 3D deep learning · Computer graphics