Through the Perspective of LiDAR: A Feature-Enriched and Uncertainty-Aware Annotation Pipeline for Terrestrial Point Cloud Segmentation

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high cost of manual annotation in terrestrial laser scanning (TLS) point cloud semantic segmentation, this paper proposes an uncertainty-aware semi-automatic labeling framework. It fuses spherical projection with multi-channel geometric and intensity features and leverages an ensemble segmentation network to generate high-confidence pseudo-labels; an uncertainty map then actively guides human annotators toward critical regions. The framework integrates inverse projection and a three-tier visualization system (2D feature maps, 3D colorized point clouds, and compact virtual spheres) to enable efficient iterative labeling. With roughly 12 annotated scans, performance saturates at mIoU ≈ 0.76; geometric features contribute the most, and a nine-channel feature combination achieves near-optimal accuracy. We release Mangrove3D, a large-scale TLS point cloud dataset for mangrove forests, and validate cross-dataset generalization on ForestSemantic and Semantic3D. Empirical analysis quantifies data efficiency and feature importance.
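The projection step summarized above (mapping 3D TLS points onto a 2D spherical grid so that 2D segmentation networks can operate on them) can be sketched as follows. The grid resolution, the equirectangular pixel mapping, and the nearest-point-wins occupancy rule are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def spherical_project(points, h=64, w=1024):
    """Project 3D points (N, 3) onto an (h, w) spherical grid.

    Returns a range image and a per-pixel point-index image (-1 where
    empty). The index image is what enables later back-projection of 2D
    labels to the 3D point cloud.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

    u = ((yaw / np.pi + 1.0) * 0.5 * w).astype(int) % w     # column
    v = ((1.0 - (pitch + np.pi / 2) / np.pi) * h).astype(int).clip(0, h - 1)

    range_img = np.full((h, w), np.inf)
    index_img = np.full((h, w), -1, dtype=int)
    order = np.argsort(-r)  # write far-to-near so the nearest point wins
    for i in order:
        range_img[v[i], u[i]] = r[i]
        index_img[v[i], u[i]] = i
    return range_img, index_img
```

Inverse projection then amounts to indexing the original cloud with `index_img` wherever it is non-negative and copying the 2D label at that pixel back to the corresponding 3D point.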

📝 Abstract
Accurate semantic segmentation of terrestrial laser scanning (TLS) point clouds is limited by costly manual annotation. We propose a semi-automated, uncertainty-aware pipeline that integrates spherical projection, feature enrichment, ensemble learning, and targeted annotation to reduce labeling effort while sustaining high accuracy. Our approach projects 3D points to a 2D spherical grid, enriches pixels with multi-source features, and trains an ensemble of segmentation networks to produce pseudo-labels and uncertainty maps, the latter guiding annotation of ambiguous regions. The 2D outputs are back-projected to 3D, yielding densely annotated point clouds supported by a three-tier visualization suite (2D feature maps, 3D colorized point clouds, and compact virtual spheres) for rapid triage and reviewer guidance. Using this pipeline, we build Mangrove3D, a semantic segmentation TLS dataset for mangrove forests. We further evaluate data efficiency and feature importance to address two key questions: (1) how much annotated data are needed and (2) which features matter most. Results show that performance saturates after ~12 annotated scans, geometric features contribute the most, and compact nine-channel stacks capture nearly all discriminative power, with the mean Intersection over Union (mIoU) plateauing at around 0.76. Finally, we confirm the generalization of our feature-enrichment strategy through cross-dataset tests on ForestSemantic and Semantic3D. Our contributions include: (i) a robust, uncertainty-aware TLS annotation pipeline with visualization tools; (ii) the Mangrove3D dataset; and (iii) empirical guidance on data efficiency and feature importance, thus enabling scalable, high-quality segmentation of TLS point clouds for ecological monitoring and beyond. The dataset and processing scripts are publicly available at https://fz-rit.github.io/through-the-lidars-eye/.
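The ensemble step in the abstract (fusing member predictions into pseudo-labels plus a per-pixel uncertainty map) can be sketched as below. Using the predictive entropy of the averaged softmax as the uncertainty measure is an assumption for illustration; the abstract does not specify the paper's exact definition.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last (class) axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_pseudo_labels(logits_list):
    """Fuse per-member logits (each H x W x C) into pseudo-labels and an
    uncertainty map. Entropy of the mean softmax is an assumed measure;
    high-entropy pixels are candidates for human review."""
    probs = np.mean([softmax(l) for l in logits_list], axis=0)
    labels = probs.argmax(axis=-1)                        # (H, W) pseudo-labels
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)  # (H, W) map
    return labels, entropy
```

In an active-labeling loop of the kind described, pixels whose entropy exceeds a chosen threshold would be routed to annotators, while low-entropy pixels keep their pseudo-labels.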
Problem

Research questions and friction points this paper is trying to address.

Reducing costly manual annotation for terrestrial point cloud segmentation
Developing semi-automated pipeline with uncertainty-aware labeling guidance
Establishing data efficiency benchmarks and feature importance analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-automated pipeline integrates spherical projection and feature enrichment
Ensemble learning produces pseudo-labels with uncertainty-guided annotation
Three-tier visualization supports rapid triage of annotated point clouds
Fei Zhang
Shanghai Jiao Tong University
Machine Learning · Computer Vision
Rob Chancia
Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
Josie Clapp
Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
Amirhossein Hassanzadeh
Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
Dimah Dera
Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
Richard MacKenzie
U.S. Forest Service, USA
Jan van Aardt
Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA