DCFFSNet: Deep Connectivity Feature Fusion Separation Network for Medical Image Segmentation

📅 2025-07-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address edge ambiguity and regional discontinuity in medical image segmentation, existing methods forcibly embed topological connectivity into feature extraction modules, resulting in coupled features and the absence of intensity quantification mechanisms. This paper proposes the Deep Connectivity Feature Fusion-Separation Network (DCFFSNet), which introduces a novel feature-space decoupling strategy to explicitly model and quantify the relative strength of connectivity features against other multi-scale features. A dynamic fusion-separation architecture is further designed to enable adaptive balancing of feature representations. Evaluated on the ISIC2018, DSB2018, and MoNuSeg benchmarks, DCFFSNet consistently outperforms state-of-the-art models, achieving improvements of up to 1.3% in Dice and 1.2% in IoU. The method markedly suppresses segmentation fragmentation while enhancing edge continuity and regional coherence, thereby improving clinical utility.
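The paper does not include code, but the core idea in the summary — quantify the relative strength of connectivity features versus other features, then fuse them with adaptive weights — can be sketched in plain NumPy. Everything here is an assumption for illustration: the function name `fuse_separate` and the per-stream "energy" metric are invented, not taken from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_separate(conn_feat, other_feat):
    """Hypothetical sketch of the fusion-separation step.

    Quantifies the relative strength of the connectivity stream
    against the other feature stream (here: mean L2 energy, an
    assumed metric), then fuses the two streams with adaptive
    softmax weights. Not the paper's actual formulation.
    """
    s_conn = np.linalg.norm(conn_feat) / conn_feat.size
    s_other = np.linalg.norm(other_feat) / other_feat.size
    w = softmax(np.array([s_conn, s_other]))  # adaptive balance
    fused = w[0] * conn_feat + w[1] * other_feat
    return fused, w
```

In the actual network this balancing would operate per scale inside the decoder; the scalar weights here stand in for whatever learned quantification the architecture uses.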

📝 Abstract
Medical image segmentation leverages topological connectivity theory to enhance edge precision and regional consistency. However, existing deep networks that integrate connectivity often forcibly inject it as an additional feature module, resulting in coupled feature spaces with no standardized mechanism for quantifying the strengths of different features. To address these issues, we propose DCFFSNet (Deep Connectivity Feature Fusion-Separation Network). It introduces an innovative feature-space decoupling strategy that quantifies the relative strength between connectivity features and other features, and builds on it a deep connectivity feature fusion-separation architecture that dynamically balances multi-scale feature expression. Experiments were conducted on the ISIC2018, DSB2018, and MoNuSeg datasets. On ISIC2018, DCFFSNet outperformed the next best model (CMUNet) by 1.3% (Dice) and 1.2% (IoU). On DSB2018, it surpassed TransUNet by 0.7% (Dice) and 0.9% (IoU). On MoNuSeg, it exceeded CSCAUNet by 0.8% (Dice) and 0.9% (IoU). The results demonstrate that DCFFSNet exceeds existing mainstream methods across all metrics, effectively resolves segmentation fragmentation, and achieves smooth edge transitions, significantly enhancing clinical usability.
Problem

Research questions and friction points this paper is trying to address.

Decouples connectivity and other features in medical image segmentation
Quantifies relative feature strengths to improve segmentation accuracy
Reduces segmentation fragmentation for smoother edge transitions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature space decoupling strategy for connectivity
Dynamic multi-scale feature expression balancing
Deep connectivity fusion-separation architecture design
Xun Ye
School of Software, Yunnan University, Kunming 650500, China
Ruixiang Tang
Rutgers University
Machine Learning, Healthcare
Mingda Zhang
Google DeepMind
Computer Vision, Multi-modal Understanding, Video Generation
Jianglong Qin
School of Software, Yunnan Provincial Key Laboratory of Software Engineering, Yunnan University, Kunming 650500, China