🤖 AI Summary
This work addresses the challenge of cell instance segmentation, which is hindered by the scarcity of high-quality annotated microscopy images and the substantial domain shift between natural-image pre-trained models and microscopy data. To bridge this gap, the authors propose DINOCell, a framework that, for the first time, adapts the self-supervised representations of DINOv2 to the microscopy domain through continued self-supervised pre-training, followed by supervised fine-tuning for high-performance segmentation. This approach mitigates the mismatch between natural-image priors and the characteristics of microscopy images, substantially improving cross-domain generalization. On the LIVECell benchmark, DINOCell achieves a SEG score of 0.784, surpassing the current best SAM-based model by 10.42%, and demonstrates strong zero-shot transfer across three out-of-distribution datasets.
📝 Abstract
Instance segmentation enables the analysis of spatial and temporal properties of cells in microscopy images by identifying the pixels belonging to each cell. However, progress is constrained by the scarcity of high-quality labeled microscopy datasets. Many recent approaches address this challenge by initializing models with segmentation-pretrained weights from large-scale natural-image models such as the Segment Anything Model (SAM). Yet representations learned from natural images often encode objectness and texture priors that are poorly aligned with microscopy data, leading to degraded performance under domain shift. We propose DINOCell, a self-supervised framework for cell instance segmentation that leverages representations from DINOv2 and adapts them to microscopy through continued self-supervised training on unlabeled cell images prior to supervised fine-tuning. On the LIVECell benchmark, DINOCell achieves a SEG score of 0.784, improving by 10.42% over leading SAM-based models, and demonstrates strong zero-shot performance on three out-of-distribution microscopy datasets. These results highlight the benefits of domain-adapted self-supervised pretraining for robust cell segmentation.