🤖 AI Summary
In medical image segmentation, ambiguous and discontinuous boundary localization undermines clinical interpretability. To address this, we propose a boundary-aware dual-path learning framework that, for the first time, decouples boundary modeling into directional gradient supervision and topology-preserving boundary refinement, overcoming the limitations of the implicit boundary learning inherent in conventional cross-entropy loss. Our method integrates a directionally weighted boundary loss, a differentiable morphological boundary-enhancement operator, and a boundary confidence-gated fusion mechanism, all embedded within a U-Net++ backbone. Evaluated on the BraTS, ACDC, and MoNuSeg benchmarks, our approach achieves a 6.2–9.8% improvement in boundary F1-score and an average 2.4% gain in Dice coefficient, significantly enhancing the geometric consistency and clinical reliability of predicted tumor and organ boundaries.
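To make the idea of a boundary-weighted loss concrete, here is a minimal NumPy sketch. This is a hypothetical illustration, not the paper's actual loss: it approximates "boundary awareness" by upweighting per-pixel cross-entropy near ground-truth mask edges (detected via gradient magnitude), whereas the paper's directionally weighted loss presumably also exploits gradient orientation. The function names `directional_boundary_weights` and `weighted_bce`, and the weighting factor `lam`, are assumptions for this sketch.

```python
import numpy as np

def directional_boundary_weights(mask, lam=4.0, eps=1e-8):
    """Per-pixel weight map that emphasizes boundary pixels.

    mask: 2D binary ground-truth array (H, W).
    lam:  extra weight given to the strongest boundary pixels.
    """
    # Spatial gradients of the ground-truth mask; large magnitude = boundary.
    gy, gx = np.gradient(mask.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # Normalize to [0, 1] and map to weights in [1, 1 + lam].
    return 1.0 + lam * (mag / (mag.max() + eps))

def weighted_bce(pred, mask, lam=4.0, eps=1e-7):
    """Binary cross-entropy with boundary-emphasizing pixel weights."""
    w = directional_boundary_weights(mask, lam)
    p = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    ce = -(mask * np.log(p) + (1 - mask) * np.log(1 - p))
    return float((w * ce).mean())
```

Compared with plain cross-entropy, errors made on or near object contours dominate the averaged loss, which pushes the model toward sharper, better-localized boundaries; interior pixels still contribute, but with the baseline weight of 1.
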