🤖 AI Summary
To address poor model generalization caused by cross-center data distribution shifts in colorectal cancer early screening, and the inability of conventional data augmentation to generate high-fidelity medical images, this paper proposes the Progressive Spectrum Diffusion Model (PSDM). Methodologically, PSDM compiles multi-source clinical annotations (segmentation masks, bounding boxes, and colonoscopy reports) into coarse-to-fine compositional prompts, enabling joint modeling of semantics and spatial structure and overcoming the limitations of single-mask conditioning. It further integrates text-image alignment encoding, progressive conditional generation, and multi-task learning for detection, classification, and segmentation. On the PolypGen dataset, PSDM improves F1 score by 2.12% and mAP by 3.09% over baselines. Clinically validated synthetic images demonstrate diagnostic reliability, significantly improving out-of-distribution robustness and cross-center adaptability.
📝 Abstract
Colorectal cancer (CRC) is a significant global health concern, and early detection through screening plays a critical role in reducing mortality. While deep learning models have shown promise in improving polyp detection, classification, and segmentation, their generalization across diverse clinical environments, particularly with out-of-distribution (OOD) data, remains a challenge. Multi-center datasets like PolypGen have been developed to address these issues, but their collection is costly and time-consuming. Traditional data augmentation techniques provide limited variability and fail to capture the complexity of medical images. Diffusion models have emerged as a promising solution for generating synthetic polyp images, but current models mainly condition image generation on segmentation masks alone, limiting their ability to capture the full clinical context. To overcome these limitations, we propose a Progressive Spectrum Diffusion Model (PSDM) that integrates diverse clinical annotations, such as segmentation masks, bounding boxes, and colonoscopy reports, by transforming them into compositional prompts. These prompts are organized into coarse and fine components, allowing the model to capture both broad spatial structures and fine details and to generate clinically accurate synthetic images. By augmenting training data with PSDM-generated samples, our model significantly improves polyp detection, classification, and segmentation. For instance, on the PolypGen dataset, PSDM increases the F1 score by 2.12% and the mean average precision by 3.09%, demonstrating superior performance in OOD scenarios and enhanced generalization.
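The core idea of compiling heterogeneous annotations into coarse and fine prompt components can be illustrated with a minimal sketch. The function name, prompt wording, and input structure below are hypothetical (the paper does not specify its prompt format); the sketch only shows the general pattern of mapping a bounding box to a coarse spatial prompt and mask-derived plus report-derived text to a fine semantic prompt.

```python
# Hypothetical sketch of coarse-to-fine compositional prompt construction.
# Names, prompt wording, and the (coarse, fine) split are illustrative
# assumptions, not the paper's actual implementation.

def compile_prompts(bbox, mask_summary, report_text):
    """Build a (coarse, fine) prompt pair for conditional generation.

    bbox: (x, y, w, h) polyp bounding box in pixel coordinates.
    mask_summary: short textual description derived from the segmentation mask.
    report_text: free-text findings from the colonoscopy report.
    """
    x, y, w, h = bbox
    # Coarse component: broad spatial structure (where the polyp sits).
    coarse = f"colonoscopy image with a polyp in region ({x},{y},{w}x{h})"
    # Fine component: detailed semantics (shape from the mask, clinical findings).
    fine = f"{mask_summary}; findings: {report_text}"
    return coarse, fine

coarse, fine = compile_prompts(
    bbox=(120, 80, 64, 48),
    mask_summary="sessile polyp with irregular border",
    report_text="suspected adenoma",
)
```

In a progressive conditioning scheme of this kind, the coarse prompt would guide early, low-frequency stages of generation while the fine prompt refines later stages with detailed clinical semantics.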