AI Summary
The accuracy of medical image segmentation is often degraded by ambiguous boundaries and background noise. To address this, we propose an edge-prompt enhanced model based on a learnable gating mechanism. Our method introduces three key innovations: (1) an edge-aware enhancement unit that explicitly models boundary structures; (2) a multi-scale prompt generation unit that jointly captures local details and global semantics; and (3) a dual-source adaptive gating fusion mechanism that dynamically optimizes the synergy between edge cues and multi-scale features. The model integrates multi-frequency feature extraction, prompt-guided fusion, and multi-scale aggregation. Evaluated on standard benchmarks including ISIC2018, it achieves an average 2.3% improvement in Dice score and an 18.7% reduction in boundary localization error over state-of-the-art methods, significantly enhancing segmentation robustness and clinical applicability.
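The multi-frequency feature extraction mentioned above can be illustrated with a minimal sketch: split an image into a low-frequency band (smooth regions) and a high-frequency residual (boundary structure). The box-blur filter and the `frequency_split` helper are illustrative assumptions, not the paper's actual operators.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple low-pass filter: k x k box average with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def frequency_split(img):
    """Split an image into low- and high-frequency bands; the
    high-frequency residual carries the boundary/edge structure
    that an edge-aware unit would amplify."""
    low = box_blur(img)
    high = img - low
    return low, high

img = np.zeros((8, 8))
img[:, 4:] = 1.0               # a vertical step edge
low, high = frequency_split(img)
# `high` peaks near the step and is ~0 in flat regions,
# so it isolates exactly the boundary cues of interest.
```

In a real model the hand-crafted blur would be replaced by learned filters at several scales, but the decomposition principle is the same.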
Abstract
Medical image segmentation is vital for diagnosis, treatment planning, and disease monitoring, but it is challenged by factors such as ambiguous edges and background noise. We introduce EEMS, an edge-prompt enhanced segmentation model built on three units: an Edge-Aware Enhancement Unit (EAEU), a Multi-scale Prompt Generation Unit (MSPGU), and a Dual-Source Adaptive Gated Fusion Unit (DAGFU). EAEU sharpens edge perception via multi-frequency feature extraction, delineating boundaries accurately. MSPGU integrates high-level semantic and low-level spatial features through a prompt-guided approach, ensuring precise target localization. DAGFU adaptively merges the edge features from EAEU with the semantic features from MSPGU, improving segmentation accuracy and robustness. Experiments on benchmarks such as ISIC2018 confirm EEMS's superior performance and its reliability as a clinical tool.
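The dual-source gated fusion described above can be sketched as follows. The abstract does not give DAGFU's exact formulation, so this is a minimal NumPy illustration assuming the common pattern of a sigmoid gate computed from both sources followed by a convex blend; the function names and the 1x1 channel-mixing weights are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(edge_feat, sem_feat, W, b):
    """Illustrative dual-source gated fusion: a learned gate weighs
    edge features against semantic features at every spatial location,
    then blends the two sources into one fused map."""
    c, h, w = edge_feat.shape
    # Concatenate both sources along the channel axis: (2C, H, W)
    stacked = np.concatenate([edge_feat, sem_feat], axis=0)
    # 1x1 "convolution" as a channel-mixing matrix: (C, 2C) @ (2C, H*W)
    logits = W @ stacked.reshape(2 * c, h * w) + b
    gate = sigmoid(logits).reshape(c, h, w)
    # Convex combination: gate near 1 favors edge cues, near 0 semantics
    return gate * edge_feat + (1.0 - gate) * sem_feat

rng = np.random.default_rng(0)
C, H, W_ = 4, 8, 8
edge = rng.standard_normal((C, H, W_))
sem = rng.standard_normal((C, H, W_))
Wg = rng.standard_normal((C, 2 * C)) * 0.1   # hypothetical learned weights
bg = np.zeros((C, 1))
fused = gated_fusion(edge, sem, Wg, bg)
```

Because the gate lies in (0, 1), every fused value stays between the corresponding edge and semantic values, which is what makes the fusion adaptive rather than a fixed average.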