🤖 AI Summary
In digital dentistry, the scarcity of annotated data for cone-beam computed tomography (CBCT) and intraoral scanning (IOS) impedes automated pulp canal segmentation and cross-modal registration. Method: The STSR 2025 Challenge at MICCAI 2025 establishes the first systematic semi-supervised learning (SSL) benchmark for tooth and pulp canal segmentation in CBCT and for CBCT–IOS registration, releasing a large-scale dataset with both labeled and unlabeled samples. Top segmentation solutions integrate nnU-Net with Mamba-like state-space models, augmented by pseudo-labeling and consistency regularization; top registration solutions combine a differentiable PointNetLK–SVD framework with geometric augmentation. Results: The best segmentation method achieves a Dice score of 0.967 and an Instance Affinity of 0.738 on the hidden test set; cross-modal registration attains accurate alignment even with minimal labeled data. All top-performing methods are open-sourced and fully reproducible, establishing a standardized evaluation protocol and facilitating clinical deployment of SSL in digital dentistry.
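The pseudo-labeling component mentioned above can be illustrated with a minimal sketch: a teacher model's soft predictions on unlabeled voxels are converted to hard labels, and only high-confidence voxels are kept for the student's loss. This is a generic confidence-thresholding scheme, not the exact implementation used by the challenge teams; the function name and threshold are hypothetical.

```python
import numpy as np

def confident_pseudo_labels(probs, threshold=0.9):
    """Turn teacher class probabilities on unlabeled voxels into hard
    pseudo-labels, masking out low-confidence predictions.

    probs: (N, C) array of per-voxel class probabilities (rows sum to 1).
    Returns (labels, mask): labels is (N,) argmax classes; mask is (N,)
    boolean, True where the top probability meets the threshold.
    """
    conf = probs.max(axis=1)          # confidence = top class probability
    labels = probs.argmax(axis=1)     # hard pseudo-label per voxel
    mask = conf >= threshold          # only confident voxels enter the loss
    return labels, mask

# Example: 3 voxels, 2 classes; the middle voxel is too uncertain to keep.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.05, 0.95]])
labels, mask = confident_pseudo_labels(probs, threshold=0.9)
# labels → [0, 0, 1]; mask → [True, False, True]
```

In a consistency-regularization setup, the same unlabeled batch would additionally be passed through two augmentations and the student penalized for disagreement between the two predictions; the masked pseudo-labels above supply the supervised-style term on unlabeled data.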
📝 Abstract
Cone-Beam Computed Tomography (CBCT) and Intraoral Scanning (IOS) are essential for digital dentistry, but the scarcity of annotated data limits automated solutions for pulp canal segmentation and cross-modal registration. To benchmark semi-supervised learning (SSL) in this domain, we organized the STSR 2025 Challenge at MICCAI 2025, featuring two tasks: (1) semi-supervised segmentation of teeth and pulp canals in CBCT, and (2) semi-supervised rigid registration of CBCT and IOS. We provided 60 labeled and 640 unlabeled IOS samples, plus 30 labeled and 250 unlabeled CBCT scans with varying resolutions and fields of view. The challenge attracted strong community participation, with top teams submitting open-source deep learning-based SSL solutions. For segmentation, leading methods used nnU-Net and Mamba-like State Space Models with pseudo-labeling and consistency regularization, achieving a Dice score of 0.967 and an Instance Affinity of 0.738 on the hidden test set. For registration, effective approaches combined PointNetLK with differentiable SVD and geometric augmentation to handle modality gaps; hybrid neural-classical refinement enabled accurate alignment despite limited labels. All data and code are publicly available at https://github.com/ricoleehduu/STS-Challenge-2025 to ensure reproducibility.
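The differentiable-SVD step in the registration pipelines solves a closed-form rigid alignment (the Kabsch/orthogonal Procrustes problem) between corresponding point sets. The sketch below shows that closed-form solve in NumPy under the simplifying assumption that correspondences are already given (in PointNetLK-style pipelines the network supplies them); the helper name is hypothetical and this is not the challenge winners' code.

```python
import numpy as np

def kabsch_rigid_align(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||
    over corresponding point pairs, via SVD of the cross-covariance.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns (R, t) with R a proper rotation (det R = +1).
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean                      # center both clouds
    dst_c = dst - dst_mean
    H = src_c.T @ dst_c                         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Because `np.linalg.svd` has a differentiable counterpart in deep learning frameworks (e.g. `torch.linalg.svd`), the same solve can sit inside a network and be trained end-to-end, which is what makes the hybrid neural-classical refinement described above possible.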