🤖 AI Summary
Deep learning models suffer performance degradation when transferred across microscopy platforms due to domain shift, while full-network fine-tuning risks semantic feature drift. To address this, we propose SIT-ADDA-Auto, a lightweight, unsupervised domain adaptation framework that adapts only shallow convolutional layers while freezing the deep semantic network. It introduces, for the first time, a prediction-uncertainty-driven mechanism to automatically select the optimal adaptation depth, integrated with shallow-layer adversarial feature alignment and subnetwork-based image translation. The method operates without target-domain labels and significantly improves fluorescence image reconstruction quality and downstream segmentation accuracy (mDice +4.2%) under multi-instrument, multi-staining, and variable-exposure conditions. Semantic feature stability is enhanced, and expert blind evaluation confirms its robustness and reliability. SIT-ADDA-Auto establishes an interpretable, minimally invasive design paradigm for unsupervised microscopy image transfer.
📄 Abstract
Deep learning is transforming microscopy, yet models often fail when applied to images from new instruments or acquisition settings. Conventional adversarial domain adaptation (ADDA) retrains entire networks, often disrupting learned semantic representations. Here, we overturn this paradigm by showing that adapting only the earliest convolutional layers, while freezing deeper layers, yields reliable transfer. Building on this principle, we introduce Subnetwork Image Translation ADDA with automatic depth selection (SIT-ADDA-Auto), a self-configuring framework that integrates shallow-layer adversarial alignment with predictive uncertainty to automatically select adaptation depth without target labels. We demonstrate robustness via multi-metric evaluation, blinded expert assessment, and uncertainty-depth ablations. Across exposure and illumination shifts, cross-instrument transfer, and multiple stains, SIT-ADDA improves reconstruction and downstream segmentation over full-encoder adaptation and non-adversarial baselines, with reduced drift of semantic features. Our results provide a design rule for label-free adaptation in microscopy and a recipe for field settings; the code is publicly available.
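The depth-selection idea above can be illustrated with a minimal sketch: for each candidate adaptation depth, the adapted model's predictive uncertainty on unlabeled target images is measured (e.g., mean binary entropy of its outputs), and the depth with the lowest uncertainty is chosen. The function names and the toy numbers below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def predictive_entropy(probs):
    """Mean per-pixel binary entropy of model outputs in [0, 1].

    Lower entropy is taken as a proxy for lower predictive
    uncertainty on the (unlabeled) target domain.
    """
    eps = 1e-8  # numerical guard against log(0)
    return float(-np.mean(probs * np.log(probs + eps)
                          + (1.0 - probs) * np.log(1.0 - probs + eps)))

def select_adaptation_depth(uncertainty_by_depth):
    """Pick the candidate depth whose adapted model is least uncertain."""
    return min(uncertainty_by_depth, key=uncertainty_by_depth.get)

# Toy example: hypothetical mean uncertainties after adapting the
# first 1..4 encoder blocks (values are made up for illustration).
scores = {1: 0.41, 2: 0.28, 3: 0.33, 4: 0.39}
best_depth = select_adaptation_depth(scores)
print(best_depth)  # -> 2
```

In this sketch the target-label-free criterion is the key point: only the model's own output distribution on target images is needed, so depth selection stays fully unsupervised.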