Data Efficiency and Transfer Robustness in Biomedical Image Segmentation: A Study of Redundancy and Forgetting with Cellpose

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses two key challenges in applying Cellpose to biomedical image segmentation: data redundancy and catastrophic forgetting during cross-domain transfer. To tackle these, we propose a Dataset Quantization (DQ) strategy and a selective replay mechanism. Leveraging MAE embeddings and t-SNE analysis of the latent space, we show that only 10% of representative samples suffice to reach performance saturation, significantly improving training efficiency and feature diversity. In multi-stage cross-domain transfer, replaying just 5–10% of source-domain data effectively mitigates catastrophic forgetting and enables optimized domain transfer ordering. Extensive experiments on the Cyto dataset validate the efficacy of our approach; the code is publicly available. Our core contributions are: (i) the first systematic characterization of Cellpose's data redundancy boundary, and (ii) a lightweight transfer learning paradigm that jointly optimizes data efficiency and knowledge retention.
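The DQ selection idea described above (compact, diverse subsets chosen in an embedding space) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-patch MAE embeddings are already computed, uses a simple k-means clustering, and keeps the sample nearest each cluster centroid. The function name `dq_select` and all parameters are hypothetical.

```python
# Hypothetical sketch of dataset quantization (DQ): cluster precomputed
# embeddings (e.g. from an MAE encoder) and keep one representative
# sample per cluster, yielding a ~`fraction`-sized diverse subset.
import numpy as np

def dq_select(embeddings, fraction=0.10, iters=20, seed=0):
    """Return indices of a compact, diverse subset of `embeddings`."""
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    k = max(1, int(round(fraction * n)))
    # Plain Lloyd's k-means to avoid external dependencies.
    centroids = embeddings[rng.choice(n, size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = embeddings[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # Keep the sample nearest each centroid as the cluster representative.
    dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
    labels = dists.argmin(axis=1)
    selected = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx):
            selected.append(int(idx[dists[idx, j].argmin()]))
    return sorted(set(selected))

# Toy usage: 200 synthetic 32-d "patch embeddings", keep ~10%.
emb = np.random.default_rng(1).normal(size=(200, 32))
subset = dq_select(emb, fraction=0.10)
print(len(subset))  # at most 20 representatives (10% of 200)
```

Picking centroid-nearest samples (rather than random ones) is what gives the subset its coverage of the latent space, matching the t-SNE observation that DQ-selected patches are more diverse than random samples.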

📝 Abstract
Generalist biomedical image segmentation models such as Cellpose are increasingly applied across diverse imaging modalities and cell types. However, two critical challenges remain underexplored: (1) the extent of training data redundancy and (2) the impact of cross-domain transfer on model retention. In this study, we conduct a systematic empirical analysis of these challenges using Cellpose as a case study. First, to assess data redundancy, we propose a simple dataset quantization (DQ) strategy for constructing compact yet diverse training subsets. Experiments on the Cyto dataset show that segmentation performance saturates with only 10% of the data, revealing substantial redundancy and the potential for training with minimal annotations. Latent space analysis using MAE embeddings and t-SNE confirms that DQ-selected patches capture greater feature diversity than random sampling. Second, to examine catastrophic forgetting, we perform cross-domain fine-tuning experiments and observe significant degradation in source-domain performance, particularly when adapting from generalist to specialist domains. We demonstrate that selective DQ-based replay, reintroducing just 5–10% of the source data, effectively restores source performance, while full replay can hinder target adaptation. Additionally, we find that training domain sequencing improves generalization and reduces forgetting in multi-stage transfer. Our findings highlight the importance of data-centric design in biomedical image segmentation and suggest that efficient training requires not only compact subsets but also retention-aware learning strategies and informed domain ordering. The code is available at https://github.com/MMV-Lab/biomedseg-efficiency.
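The selective replay step from the abstract (mixing a small fraction of source-domain data into target-domain fine-tuning) can be sketched as below. This is an illustrative stand-in, not the paper's code: `build_replay_set` and its parameters are hypothetical, and the random draw stands in for the DQ-selected source subset.

```python
# Hypothetical sketch of selective replay: fine-tune on the full target
# dataset plus a small fraction (e.g. 5-10%) of source-domain samples,
# so the model retains source performance while adapting.
import random

def build_replay_set(source, target, replay_fraction=0.05, seed=0):
    """Return target samples mixed with `replay_fraction` of source samples."""
    rng = random.Random(seed)
    k = max(1, int(round(replay_fraction * len(source))))
    # Random draw here; the paper's approach would use a DQ-selected subset.
    replayed = rng.sample(source, k)
    mixed = list(target) + replayed
    rng.shuffle(mixed)
    return mixed

# Toy usage with string stand-ins for annotated images.
src = [f"src_{i}" for i in range(100)]
tgt = [f"tgt_{i}" for i in range(40)]
mixed = build_replay_set(src, tgt, replay_fraction=0.05)
print(len(mixed))  # 45: all 40 target samples + 5 replayed source samples
```

Keeping the replay fraction small matters: per the abstract, full replay of the source data can hinder target adaptation, whereas 5–10% is enough to restore source performance.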
Problem

Research questions and friction points this paper is trying to address.

Assessing training data redundancy in biomedical image segmentation models
Examining catastrophic forgetting during cross-domain transfer learning
Developing strategies to maintain source performance while adapting to new domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset quantization strategy for compact training subsets
Selective replay method to prevent catastrophic forgetting
Training domain sequencing to improve generalization