🤖 AI Summary
This work addresses the performance bottleneck in cross-domain electron microscopy image segmentation caused by the absence of annotations in the target domain. The authors propose a weakly supervised domain adaptation method that leverages sparse point prompts and local human preferences. By integrating self-training, prompt-guided contrastive learning, and local direct preference optimization within an interactive, prompt-driven multi-task framework, the approach supports diverse annotation scenarios, including fully unlabeled, partially annotated, and fully point-prompted settings. The proposed plug-and-play strategies (LPO, SLPO, and UPO) enable flexible deployment and achieve significant performance gains across four cross-domain tasks, outperforming current unsupervised, weakly supervised, and SAM-based methods. Notably, both the automatic and interactive variants of the model attain segmentation accuracy comparable to that of fully supervised models.
📝 Abstract
Domain adaptive segmentation (DAS) is a promising paradigm for delineating intracellular structures across various large-scale electron microscopy (EM) datasets without requiring extensive annotated data in each domain. However, prevalent unsupervised domain adaptation (UDA) strategies often exhibit limited and biased performance, which hinders their practical application. In this study, we explore sparse points and local human preferences as weak labels in the target domain, thereby presenting a more realistic yet annotation-efficient setting. Specifically, we develop Prefer-DAS, which pioneers sparse promptable learning and local preference alignment. Prefer-DAS is a promptable multi-task model that integrates self-training and prompt-guided contrastive learning. Unlike SAM-like methods, Prefer-DAS allows full, partial, or even no point prompts during both training and inference, and thus enables interactive segmentation. Instead of image-level human preference alignment for segmentation, we introduce Local Direct Preference Optimization (LPO) and sparse LPO (SLPO), plug-and-play solutions for alignment with spatially varying or sparse human feedback. To address potentially missing feedback, we also introduce Unsupervised Preference Optimization (UPO), which leverages self-learned preferences. As a result, the Prefer-DAS model can effectively perform both weakly supervised and unsupervised DAS, depending on the availability of points and human preferences. Comprehensive experiments on four challenging DAS tasks demonstrate that our model outperforms SAM-like methods as well as unsupervised and weakly supervised DAS methods in both automatic and interactive segmentation modes, highlighting its strong generalizability and flexibility. Additionally, the performance of our model is very close to, or even exceeds, that of supervised models.
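The abstract does not spell out the LPO objective. As a rough, hypothetical illustration of the idea of restricting DPO-style preference alignment to spatially local feedback, one might compute the standard DPO margin per pixel and average the loss only over pixels where human feedback exists (a binary mask). All names and shapes below are assumptions for the sketch, not the paper's actual formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def local_dpo_loss(logp_pref, logp_dispref,
                   ref_logp_pref, ref_logp_dispref,
                   mask, beta=0.1):
    """Hypothetical per-pixel DPO loss restricted to a feedback mask.

    logp_pref / logp_dispref: per-pixel log-probabilities (H x W) the current
        model assigns to the human-preferred and dispreferred segmentations.
    ref_logp_*: the same quantities under a frozen reference model.
    mask: binary H x W array; 1 where local human feedback is available.
    """
    # Standard DPO margin, computed independently at every pixel.
    margin = beta * ((logp_pref - ref_logp_pref)
                     - (logp_dispref - ref_logp_dispref))
    per_pixel = -np.log(sigmoid(margin))
    # Average only over pixels with feedback; sparse feedback (SLPO-like)
    # simply corresponds to a mask with few nonzero entries.
    return float((per_pixel * mask).sum() / max(mask.sum(), 1))
```

With a zero margin everywhere (model and reference agree, no preference learned yet), each masked pixel contributes -log(0.5) = ln 2, so the loss starts near 0.693 and decreases as the model's preferred/dispreferred gap widens relative to the reference.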