Semi-Supervised Biomedical Image Segmentation via Diffusion Models and Teacher-Student Co-Training

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited generalizability of supervised models in medical image segmentation due to scarce annotated data, this paper proposes a semi-supervised framework integrating denoising diffusion probabilistic models (DDPMs) with teacher–student collaborative training. Its key contributions are: (1) the first application of DDPMs to generate semantic segmentation masks, enabling structure-aware pseudo-label modeling; (2) a noise-reconstruction cycle-consistency pretraining mechanism to enhance the teacher model’s unsupervised representation learning capability; and (3) a multi-round iterative pseudo-label refinement pipeline incorporating confidence-based filtering and cross-modality consistency constraints. Evaluated on multiple cross-modality medical segmentation benchmarks, the method achieves over 96% of fully supervised performance using only 10% labeled data, significantly outperforming state-of-the-art semi-supervised approaches.

Technology Category

Application Category

📝 Abstract
Supervised deep learning for semantic segmentation has achieved excellent results in accurately identifying anatomical and pathological structures in medical images. However, it often requires large annotated training datasets, which limits its scalability in clinical settings. To address this challenge, semi-supervised learning is a well-established approach that leverages both labeled and unlabeled data. In this paper, we introduce a novel semi-supervised teacher-student framework for biomedical image segmentation, inspired by the recent success of generative models. Our approach leverages denoising diffusion probabilistic models (DDPMs) to generate segmentation masks by progressively refining noisy inputs conditioned on the corresponding images. The teacher model is first trained in an unsupervised manner using a cycle-consistency constraint based on noise-corrupted image reconstruction, enabling it to generate informative semantic masks. Subsequently, the teacher is integrated into a co-training process with a twin-student network. The student learns from ground-truth labels when available and from teacher-generated pseudo-labels otherwise, while the teacher continuously improves its pseudo-labeling capabilities. Finally, to further enhance performance, we introduce a multi-round pseudo-label generation strategy that iteratively improves the pseudo-labeling process. We evaluate our approach on multiple biomedical imaging benchmarks, spanning multiple imaging modalities and segmentation tasks. Experimental results show that our method consistently outperforms state-of-the-art semi-supervised techniques, highlighting its effectiveness in scenarios with limited annotated data. The code to replicate our experiments can be found at https://github.com/ciampluca/diffusion_semi_supervised_biomedical_image_segmentation
Problem

Research questions and friction points this paper is trying to address.

Reducing reliance on large annotated medical image datasets
Improving segmentation accuracy with limited labeled data
Enhancing pseudo-label generation via diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion models for mask generation
Implements teacher-student co-training framework
Introduces multi-round pseudo-label refinement
🔎 Similar Papers
No similar papers found.