🤖 AI Summary
2D layer-wise segmentation errors lead to over-segmentation in 3D cell image reconstruction. Method: We propose an interpretable cross-layer correction framework that jointly models geometric consistency and 3D topological connectivity across adjacent layers, employing a ResNet-based binary classifier to determine whether cellular structures should be merged across layers. Contribution/Results: This work is the first to incorporate explicit 3D topological constraints into the post-processing of 2D segmentation, establishing a dual-driven (geometric + topological) correction mechanism. The framework is plug-and-play and also extends to non-2D-based methods. Pretrained on a plant cell dataset, it achieves zero-shot transfer to animal cell data, significantly reducing over-segmentation rates. Its generic pipeline enables rapid adaptation to any annotated dataset, demonstrating strong generalizability and practical utility.
📝 Abstract
3D cellular image segmentation methods are commonly divided into non-2D-based and 2D-based approaches, the latter reconstructing 3D shapes from the segmentation results of 2D layers. However, errors in the 2D results often propagate, leading to oversegmentation in the final 3D results. To tackle this issue, we introduce an interpretable geometric framework that addresses oversegmentation by correcting the 2D segmentation results based on geometric information from adjacent layers. Leveraging both geometric (layer-to-layer, 2D) and topological (3D shape) features, we use binary classification to determine whether neighboring cells should be stitched. We develop a classifier pre-trained on public plant cell datasets and validate its performance on animal cell datasets, confirming its effectiveness in correcting oversegmentation under the transfer learning setting. Furthermore, we demonstrate that our framework can be extended to correcting oversegmentation in non-2D-based methods. A clear pipeline is provided for end-users to adapt the pre-trained model to any labeled dataset.
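To make the core decision concrete, the sketch below shows how a cross-layer stitching decision could look in principle: for a candidate pair of cell regions on adjacent 2D layers, compute simple geometric features (overlap IoU and centroid displacement, as proxies for layer-to-layer consistency and 3D continuity) and decide whether to merge them. The thresholds stand in for the paper's learned ResNet-based binary classifier; all function names and parameter values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cross_layer_features(mask_a, mask_b):
    """Geometric features for a candidate region pair on adjacent layers.

    mask_a, mask_b: boolean 2D arrays of the same shape, each containing
    one segmented cell region. Names and features are illustrative only.
    """
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    iou = inter / union if union else 0.0
    # Centroid displacement between layers: a crude proxy for whether the
    # two regions trace a single continuous 3D cell.
    centroid_a = np.argwhere(mask_a).mean(axis=0)
    centroid_b = np.argwhere(mask_b).mean(axis=0)
    dist = float(np.linalg.norm(centroid_a - centroid_b))
    return iou, dist

def should_stitch(mask_a, mask_b, iou_thresh=0.3, dist_thresh=10.0):
    """Threshold-based stand-in for the learned binary classifier.

    In the actual framework this decision is made by a ResNet-based
    classifier over geometric and topological features; the thresholds
    here are arbitrary placeholders.
    """
    iou, dist = cross_layer_features(mask_a, mask_b)
    return iou >= iou_thresh and dist <= dist_thresh
```

In the full pipeline, such pairwise decisions would be applied to every pair of overlapping regions on adjacent layers, and positively classified pairs merged before reconstructing the 3D shape, which is what suppresses oversegmentation in the final result.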