SELMA3D challenge: Self-supervised learning for 3D light-sheet microscopy image segmentation

📅 2025-01-07
🤖 AI Summary
Domain shift severely limits the generalizability of 3D light-sheet microscopy (LSM) image segmentation for large-scale biological tissues. Method: We introduce the first MICCAI challenge dataset tailored for self-supervised learning, comprising 35 ultra-large unlabeled 3D volumes (>1000³ voxels) of mouse and human brain tissue and 315 finely annotated subvolumes. We pioneer the integration of contrastive learning and masked autoencoding for 3D LSM segmentation, proposing a novel cross-sample, cross-structure (vessel-/spot-like) representation learning paradigm. Contribution/Results: This establishes the first biomedical 3D self-supervised segmentation benchmark. Participating methods include ViT- and U-Net-based architectures and voxel-level reconstruction tasks. On unseen test data, our approach achieves an average Dice score improvement of 12.6% over fully supervised baselines, demonstrating that large-scale unlabeled 3D biomedical imagery significantly enhances the robustness and generalizability of segmentation models.
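The summary reports results as Dice score improvements. For reference, the Dice coefficient measures volumetric overlap between a predicted and a ground-truth binary mask; a minimal numpy sketch (illustrative, not code from the challenge) on toy 3D masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary 3D masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4x4 volumes: each mask covers two z-slabs, sharing one slab.
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True
print(round(dice_score(a, b), 3))  # shared slab = half of each mask -> 0.5
```

A reported "Dice improvement of 12.6%" thus means the overlap score on this 0-to-1 scale rose by 0.126 on average relative to the fully supervised baseline.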

📝 Abstract
Recent innovations in light sheet microscopy, paired with developments in tissue clearing techniques, enable the 3D imaging of large mammalian tissues with cellular resolution. Combined with the progress in large-scale data analysis, driven by deep learning, these innovations empower researchers to rapidly investigate the morphological and functional properties of diverse biological samples. Segmentation, a crucial preliminary step in the analysis process, can be automated using domain-specific deep learning models with expert-level performance. However, these models exhibit high sensitivity to domain shifts, leading to a significant drop in accuracy when applied to data outside their training distribution. To address this limitation, and inspired by the recent success of self-supervised learning in training generalizable models, we organized the SELMA3D Challenge during the MICCAI 2024 conference. SELMA3D provides a vast collection of light-sheet images from cleared mouse and human brains, comprising 35 large 3D images, each with over 1000³ voxels, and 315 annotated small patches for fine-tuning, preliminary testing and final testing. The dataset encompasses diverse biological structures, including vessel-like and spot-like structures. Five teams participated in all phases of the challenge, and their proposed methods are reviewed in this paper. Quantitative and qualitative results from most participating teams demonstrate that self-supervised learning on large datasets improves segmentation model performance and generalization. We will continue to support and extend SELMA3D as an inaugural MICCAI challenge focused on self-supervised learning for 3D microscopy image segmentation.
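One of the self-supervised pretext tasks mentioned above is masked autoencoding: random sub-patches of an unlabeled volume are hidden and a network is trained to reconstruct them. A minimal numpy sketch of the masking step for a 3D volume (an illustration of the general technique, with hypothetical patch size and mask ratio, not the challenge teams' actual pipelines):

```python
import numpy as np

def random_mask_3d(volume, patch=8, mask_ratio=0.75, seed=None):
    """Zero out a random subset of non-overlapping cubic sub-patches,
    as in masked-autoencoder-style pretraining. Returns the masked
    volume and a boolean grid marking which sub-patches were hidden."""
    rng = np.random.default_rng(seed)
    d, h, w = (s // patch for s in volume.shape)
    n = d * h * w
    masked_idx = rng.choice(n, size=int(n * mask_ratio), replace=False)
    out = volume.copy()
    mask = np.zeros(n, dtype=bool)
    mask[masked_idx] = True
    for idx in masked_idx:
        z, rem = divmod(idx, h * w)
        y, x = divmod(rem, w)
        out[z*patch:(z+1)*patch, y*patch:(y+1)*patch, x*patch:(x+1)*patch] = 0
    return out, mask.reshape(d, h, w)

# A 32^3 toy volume split into 4x4x4 = 64 sub-patches; 75% are hidden.
vol = np.ones((32, 32, 32), dtype=np.float32)
masked, mask = random_mask_3d(vol, patch=8, mask_ratio=0.75, seed=0)
print(mask.sum(), mask.size)  # 48 of 64 sub-patches masked
```

The reconstruction loss is then computed only on the hidden sub-patches, which forces the encoder to learn structural context (e.g., vessel continuity) from unlabeled data before any annotated patches are used for fine-tuning.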
Problem

Research questions and friction points this paper is trying to address.

3D microscopy image segmentation
accuracy degradation
adaptability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised Learning
3D Microscopy Image Segmentation
SELMA3D Challenge
Ying Chen
Institute for Stroke and Dementia Research, Klinikum der Universitaet Muenchen, Ludwig-Maximilians University Munich, Munich, Germany; Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Center Munich, German Research Center for Environmental Health, Neuherberg, Germany
Rami Al-Maskari
Ph.D. Student of Computer Science, Technical University of Munich, Helmholtz Zentrum München
I. Horvath
Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Center Munich, German Research Center for Environmental Health, Neuherberg, Germany; TUM School of Computation, Information and Technology (CIT), Technical University of Munich, Munich, Germany
Mayar Ali
Helmholtz Munich, Ludwig Maximilian University of Munich
Luciano Hoher
Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Center Munich, German Research Center for Environmental Health, Neuherberg, Germany
Kaiyuan Yang
Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
Zengming Lin
Shanghai University of Finance and Economics, Shanghai, China
Zhiwei Zhai
BGI research
Mengzhe Shen
BGI Research, Shenzhen, China
Dejin Xun
National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
Yi Wang
National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
Tony Xu
University of Toronto
Maged Goubran
Canada Research Chair in AI and Computational Neuroscience, University of Toronto
Yunheng Wu
Graduate School of Informatics, Nagoya University, Nagoya, Japan
J. Paetzold
Department of Computing, Imperial College London, United Kingdom; Department of Radiology, Weill Cornell Medicine, Cornell University, New York, United States
Ali Erturk
Institute for Stroke and Dementia Research, Klinikum der Universitaet Muenchen, Ludwig-Maximilians University Munich, Munich, Germany; Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Center Munich, German Research Center for Environmental Health, Neuherberg, Germany