Unsupervised deep learning for semantic segmentation of multispectral LiDAR forest point clouds

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Unsupervised leaf-wood segmentation in high-density airborne multispectral LiDAR forest point clouds suffers from low accuracy, while supervised methods rely heavily on labor-intensive manual annotations. Method: We propose GrowSP-ForMS—the first unsupervised deep learning framework tailored for multispectral forest point clouds—integrating multispectral reflectance (NIR/Red/Green), geometric features, and self-supervised contrastive learning, augmented by a structured growth strategy to enhance clustering discriminability. Results: On a standardized multispectral benchmark, GrowSP-ForMS achieves 84.3% mean accuracy and 69.6% mIoU—improving test-set mIoU over the original GrowSP by 29.4 percentage points, with a 5.6-point gain attributable specifically to multispectral information—thereby substantially advancing the state of the art in unsupervised leaf-wood segmentation.

📝 Abstract
Point clouds captured with laser scanning systems from forest environments can be utilized in a wide variety of applications within forestry and plant ecology, such as the estimation of tree stem attributes, leaf angle distribution, and above-ground biomass. However, effectively utilizing the data in such tasks requires the semantic segmentation of the data into wood and foliage points, also known as leaf-wood separation. The traditional approach to leaf-wood separation has been geometry- and radiometry-based unsupervised algorithms, which tend to perform poorly on data captured with airborne laser scanning (ALS) systems, even with a high point density. While recent machine and deep learning approaches achieve great results even on sparse point clouds, they require manually labeled training data, which is often extremely laborious to produce. Multispectral (MS) information has been demonstrated to have potential for improving the accuracy of leaf-wood separation, but quantitative assessment of its effects has been lacking. This study proposes a fully unsupervised deep learning method, GrowSP-ForMS, which is specifically designed for leaf-wood separation of high-density MS ALS point clouds and based on the GrowSP architecture. GrowSP-ForMS achieved a mean accuracy of 84.3% and a mean intersection over union (mIoU) of 69.6% on our MS test set, outperforming the unsupervised reference methods by a significant margin. When compared to supervised deep learning methods, our model performed similarly to the slightly older PointNet architecture but was outclassed by more recent approaches. Finally, two ablation studies were conducted, which demonstrated that our proposed changes increased the test set mIoU of GrowSP-ForMS by 29.4 percentage points (pp) in comparison to the original GrowSP model and that utilizing MS data improved the mIoU by 5.6 pp from the monospectral case.
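The mean accuracy and mIoU figures above are standard two-class segmentation metrics, averaged over the foliage and wood classes. A minimal sketch of how they are computed for per-point leaf/wood labels (the label arrays and the 0 = foliage / 1 = wood encoding are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

def leaf_wood_metrics(y_true, y_pred, classes=(0, 1)):
    """Mean accuracy and mIoU for a leaf/wood point labelling.

    y_true, y_pred: integer arrays of per-point labels
    (here assumed 0 = foliage, 1 = wood). Both metrics are
    averaged over the two classes.
    """
    accs, ious = [], []
    for c in classes:
        gt = y_true == c
        pr = y_pred == c
        tp = np.sum(gt & pr)
        # per-class accuracy: fraction of class-c points predicted as c
        accs.append(tp / max(gt.sum(), 1))
        # per-class IoU: intersection over union of the two point sets
        ious.append(tp / max((gt | pr).sum(), 1))
    return float(np.mean(accs)), float(np.mean(ious))

# toy example with hypothetical labels for 8 points
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_pred = np.array([0, 0, 1, 1, 1, 1, 0, 0])
macc, miou = leaf_wood_metrics(y_true, y_pred)  # 0.75, 0.6
```

Note that mIoU penalizes false positives as well as false negatives, which is why it sits below mean accuracy in the reported results.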
Problem

Research questions and friction points this paper is trying to address.

Unsupervised deep learning for leaf-wood separation
Improving accuracy with multispectral LiDAR data
Addressing challenges in airborne laser scanning systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised deep learning for segmentation
Utilizes multispectral LiDAR forest data
Enhances leaf-wood separation accuracy
Lassi Ruoppa
Finnish Geospatial Research Institute
Oona Oinonen
Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute FGI, The National Land Survey of Finland, Vuorimiehentie 5, Espoo, FI-02150, Finland
Josef Taher
Research Scientist
deep learning, hyperspectral LiDAR
Matti Lehtomäki
Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute FGI, The National Land Survey of Finland, Vuorimiehentie 5, Espoo, FI-02150, Finland
Narges Takhtkeshha
3D Optical Metrology (3DOM) unit, Bruno Kessler Foundation (FBK), Trento, Italy
A. Kukko
Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute FGI, The National Land Survey of Finland, Vuorimiehentie 5, Espoo, FI-02150, Finland
H. Kaartinen
Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute FGI, The National Land Survey of Finland, Vuorimiehentie 5, Espoo, FI-02150, Finland
Juha Hyyppä
Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute FGI, The National Land Survey of Finland, Vuorimiehentie 5, Espoo, FI-02150, Finland; Department of Built Environment, Aalto University, School of Engineering, P.O. Box 11000, Aalto, FI-00076, Finland