🤖 AI Summary
High-quality, diverse, publicly available annotated 3D point cloud datasets for organ-level plant segmentation are scarce. Method: We introduce PLANesT-3D, a new annotated benchmark dataset for semantic and instance segmentation of colored plant point clouds, comprising 34 specimens of chili pepper, miniature rose, and red currant, with per-point "leaf"/"stem" semantic labels and organ-level instance annotations. We further propose SP-LSCnet, a novel semantic segmentation method that combines unsupervised superpoint extraction with a 3D point-based deep learning approach. Contribution/Results: SP-LSCnet is evaluated on PLANesT-3D alongside two existing deep network architectures, PointNet++ and RoseSegNet. This work provides a fine-grained plant point cloud segmentation benchmark together with a new segmentation method, laying data and algorithmic groundwork for the automatic interpretation of 3D plant models.
📝 Abstract
Creation of new annotated public datasets is crucial for helping advances in 3D computer vision and machine learning reach their full potential in the automatic interpretation of 3D plant models. In this paper, we introduce PLANesT-3D, a new annotated dataset of 3D color point clouds of plants. PLANesT-3D is composed of 34 point cloud models representing 34 real plants from three different plant species: *Capsicum annuum*, *Rosa kordana*, and *Ribes rubrum*. Both semantic labels, in terms of "leaf" and "stem", and organ instance labels were manually annotated for the full point clouds. As an additional contribution, SP-LSCnet, a novel semantic segmentation method that combines unsupervised superpoint extraction with a 3D point-based deep learning approach, is introduced and evaluated on the new dataset. Two existing deep neural network architectures, PointNet++ and RoseSegNet, were also tested on the point clouds of PLANesT-3D for semantic segmentation.
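To make the annotation scheme concrete, the sketch below shows a hypothetical per-point record layout for a colored, organ-annotated plant point cloud and how a point-wise semantic prediction would be scored against it. The array names, the 0/1 class encoding, and the evaluation metric are illustrative assumptions, not the actual PLANesT-3D file format or benchmark protocol.

```python
import numpy as np

# Hypothetical layout for one annotated specimen: N points, each with
# XYZ coordinates, RGB color, a semantic class (0 = "stem", 1 = "leaf"),
# and an organ instance id grouping points of the same leaf or stem.
N = 5
xyz = np.random.rand(N, 3).astype(np.float32)            # 3D coordinates
rgb = np.random.randint(0, 256, (N, 3), dtype=np.uint8)  # per-point color
semantic = np.array([0, 1, 1, 0, 1])                     # leaf/stem label per point
instance = np.array([0, 1, 1, 0, 2])                     # organ instance per point

# A semantic segmentation network predicts one class per point; point-wise
# accuracy is the fraction of points whose prediction matches the label.
pred = np.array([0, 1, 0, 0, 1])
accuracy = (pred == semantic).mean()
print(accuracy)  # 4 of 5 points correct -> 0.8
```

In this layout, semantic segmentation (leaf vs. stem) uses only the `semantic` column, while instance segmentation additionally requires the `instance` ids to separate individual organs of the same class.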