Reducing Domain Gap with Diffusion-Based Domain Adaptation for Cell Counting

📅 2025-12-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the scarcity of annotated real microscopic images and the substantial texture/structural domain gap between synthetic and real images—which undermines domain adaptation—this paper proposes Inversion-based Style Transfer (InST), a diffusion-model-driven approach. InST is the first to adapt inversion-based style transfer to biomedical microscopy, integrating latent-space adaptive instance normalization (AdaIN) with diffusion-based stochastic inversion to achieve weak structural preservation and high-fidelity style transfer for synthetic image enhancement. Combined with domain-adaptive augmentation (DACS/CutMix) and EfficientNet-B0 fine-tuning, InST reduces the mean absolute error (MAE) on Cell200-s from 53.70 to 25.95 (−52%), significantly outperforming both hand-crafted synthetic data (−37%) and training solely on real annotations (25.95 vs. 27.74).

Technology Category

Application Category

📝 Abstract
Generating realistic synthetic microscopy images is critical for training deep learning models in label-scarce environments, such as cell counting with many cells per image. However, traditional domain adaptation methods often struggle to bridge the domain gap when synthetic images lack the complex textures and visual patterns of real samples. In this work, we adapt the Inversion-Based Style Transfer (InST) framework originally designed for artistic style transfer to biomedical microscopy images. Our method combines latent-space Adaptive Instance Normalization with stochastic inversion in a diffusion model to transfer the style from real fluorescence microscopy images to synthetic ones, while weakly preserving content structure. We evaluate the effectiveness of our InST-based synthetic dataset for downstream cell counting by pre-training and fine-tuning EfficientNet-B0 models on various data sources, including real data, hard-coded synthetic data, and the public Cell200-s dataset. Models trained with our InST-synthesized images achieve up to 37% lower Mean Absolute Error (MAE) compared to models trained on hard-coded synthetic data, and a 52% reduction in MAE compared to models trained on Cell200-s (from 53.70 to 25.95 MAE). Notably, our approach also outperforms models trained on real data alone (25.95 vs. 27.74 MAE). Further improvements are achieved when combining InST-synthesized data with lightweight domain adaptation techniques such as DACS with CutMix. These findings demonstrate that InST-based style transfer most effectively reduces the domain gap between synthetic and real microscopy data. Our approach offers a scalable path for enhancing cell counting performance while minimizing manual labeling effort. The source code and resources are publicly available at: https://github.com/MohammadDehghan/InST-Microscopy.
Problem

Research questions and friction points this paper is trying to address.

Reducing domain gap between synthetic and real microscopy images
Improving cell counting accuracy with style-transferred synthetic data
Minimizing manual labeling effort for deep learning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapting artistic style transfer to biomedical microscopy images
Using diffusion model with stochastic inversion for style transfer
Combining InST-synthesized data with lightweight domain adaptation techniques
🔎 Similar Papers
No similar papers found.