Diffusion-Guided Mask-Consistent Paired Mixing for Endoscopic Image Segmentation

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address label ambiguity and domain shift induced by conventional data augmentation in endoscopic image segmentation, this paper proposes a novel paradigm integrating mask-consistent paired mixing with diffusion-based synthesis. Our method tackles these issues through two key innovations: (1) a mask-consistent paired mixing mechanism that fuses the visual appearances of multiple real images under a shared ground-truth mask, ensuring pixel-level semantic consistency; and (2) a real-anchor-based learnable annealing strategy that dynamically modulates mixing intensity and loss weighting to enable smooth transition between synthetic and real data. We employ a diffusion model to generate mask-conditioned synthetic images and jointly optimize all components in an end-to-end manner. Extensive experiments demonstrate state-of-the-art performance across five benchmark datasets—Kvasir-SEG, PICCOLO, CVC-ClinicDB, NPC-LES, and ISIC 2017—significantly outperforming mainstream baselines.

📝 Abstract
Augmentation for dense prediction typically relies on either sample mixing or generative synthesis. Mixing improves robustness but misaligned masks yield soft label ambiguity. Diffusion synthesis increases apparent diversity but, when trained as common samples, overlooks the structural benefit of mask conditioning and introduces synthetic-real domain shift. We propose a paired, diffusion-guided paradigm that fuses the strengths of both. For each real image, a synthetic counterpart is generated under the same mask and the pair is used as a controllable input for Mask-Consistent Paired Mixing (MCPMix), which mixes only image appearance while supervision always uses the original hard mask. This produces a continuous family of intermediate samples that smoothly bridges synthetic and real appearances under shared geometry, enlarging diversity without compromising pixel-level semantics. To keep learning aligned with real data, Real-Anchored Learnable Annealing (RLA) adaptively adjusts the mixing strength and the loss weight of mixed samples over training, gradually re-anchoring optimization to real data and mitigating distributional bias. Across Kvasir-SEG, PICCOLO, CVC-ClinicDB, a private NPC-LES cohort, and ISIC 2017, the approach achieves state-of-the-art segmentation performance and consistent gains over baselines. The results show that combining label-preserving mixing with diffusion-driven diversity, together with adaptive re-anchoring, yields robust and generalizable endoscopic segmentation.
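As a rough illustration of the MCPMix idea in the abstract (function names, shapes, and the blending form are our own assumptions, not the authors' code), appearance-only mixing under a shared mask can be sketched as:

```python
import numpy as np

def mcp_mix(real_img, synth_img, lam):
    """Illustrative sketch of Mask-Consistent Paired Mixing (MCPMix).

    Only image appearance is interpolated; the shared ground-truth mask
    is deliberately kept out of the mix, so supervision remains the
    original hard mask for every intermediate sample.
    """
    assert real_img.shape == synth_img.shape
    return lam * real_img + (1.0 - lam) * synth_img

# Toy pair: a real image and its mask-conditioned synthetic counterpart.
real = np.full((4, 4), 1.0)
synth = np.zeros((4, 4))
mask = np.eye(4)                      # shared hard mask, never mixed

mixed = mcp_mix(real, synth, lam=0.7)  # appearance blend only
# Training would pair (mixed, mask): geometry fixed, appearance varied.
```

Sweeping `lam` over [0, 1] yields the continuous family of intermediate samples between synthetic and real appearance that the abstract describes, all sharing one hard label.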
Problem

Research questions and friction points this paper is trying to address.

Addresses soft label ambiguity from misaligned masks in augmentation mixing
Reduces synthetic-real domain shift in diffusion-based endoscopic image synthesis
Enhances segmentation robustness through geometry-preserving appearance mixing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-guided paired mixing under same mask
Mask-consistent mixing preserves original hard supervision
Real-anchored annealing adaptively adjusts mixing strength
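The paper's real-anchored annealing (RLA) is learnable and optimized end-to-end; as a hedged stand-in, a fixed cosine decay of both the mixing strength and the mixed-sample loss weight (our own simplification, not the authors' schedule) conveys the re-anchoring behavior:

```python
import math

def anneal(step, total_steps, lam_max=0.5):
    """Simplified stand-in for Real-Anchored Learnable Annealing (RLA).

    Both the mixing strength and the loss weight of mixed samples decay
    from their initial values toward zero over training, gradually
    re-anchoring optimization on real data.
    """
    frac = min(step / total_steps, 1.0)
    lam = lam_max * 0.5 * (1.0 + math.cos(math.pi * frac))  # mixing strength
    w_mix = lam / lam_max if lam_max > 0 else 0.0           # mixed-sample loss weight
    return lam, w_mix

# Early in training: strong mixing, mixed samples fully weighted.
# Late in training: mixing and mixed-loss weight vanish; only real data drives the loss.
```

At `step = 0` this returns `(lam_max, 1.0)`; at `step = total_steps` it returns `(0.0, 0.0)`, i.e. the total loss collapses onto the real-data term.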
Pengyu Jie
School of Intelligent Engineering, Sun Yat-sen University
machine learning · medical imaging · computer vision
Wanquan Liu
Sun Yat-sen University
Computer vision · Intelligent control · Pattern recognition
Rui He
Department of Otolaryngology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510000, China
Yihui Wen
Department of Otolaryngology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510000, China
Deyu Meng
Professor, Xi'an Jiaotong University
Machine Learning · Applied Mathematics · Computer Vision · Artificial Intelligence
Chenqiang Gao
School of Intelligent Engineering, Sun Yat-sen University (Shenzhen Campus), Shenzhen 518107, China