🤖 AI Summary
Medical image segmentation models often sacrifice practicality and deployment efficiency in pursuit of higher accuracy. To address this, we propose SimpleUNet, a highly lightweight, scalable segmentation architecture designed for clinical deployment. Built on the U-Net backbone and trained end to end, it introduces three key innovations: (1) a partial feature selection mechanism within skip connections to suppress redundancy; (2) a fixed-width network design that keeps the architecture structurally simple and its parameter count predictable; and (3) a lightweight adaptive feature fusion module to enhance representational capacity. Experimentally, a 16 KB parameter configuration already surpasses LBUNet, while the 0.67 MB variant achieves an mDSC of 85.76% on a multi-center breast lesion dataset, surpassing both U-Net and TransUNet; it also outperforms state-of-the-art models on skin lesion and polyp segmentation tasks. SimpleUNet thus strikes a favorable trade-off between segmentation accuracy and computational efficiency, supporting real-world clinical adoption.
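The first innovation, partial feature selection in skip connections, amounts to forwarding only a fraction of the encoder's channels to the decoder instead of concatenating all of them. A minimal sketch in plain Python follows; the selection rule (keep the leading fraction of channels) and the ratio are illustrative assumptions, not the paper's actual implementation:

```python
import math

def partial_skip(encoder_channels, ratio=0.25):
    """Keep only a leading fraction of encoder channels for the skip
    connection; the remainder is dropped as redundant.
    (Illustrative: the paper's selection criterion may differ.)"""
    c = len(encoder_channels)
    k = max(1, math.ceil(ratio * c))
    return encoder_channels[:k]

def fuse(decoder_channels, skip_channels):
    """Concatenate decoder features with the selected skip features
    along the channel dimension."""
    return decoder_channels + skip_channels

# Toy example: 8 encoder "channels", each a 2x2 feature-map placeholder.
enc = [[[i, i], [i, i]] for i in range(8)]
dec = [[[0, 0], [0, 0]] for _ in range(4)]

selected = partial_skip(enc, ratio=0.25)   # 2 of 8 channels kept
fused = fuse(dec, selected)                # 4 + 2 = 6 channels total
print(len(selected), len(fused))           # -> 2 6
```

Compared with a full skip connection, the decoder's first convolution after fusion sees fewer input channels, which is where the parameter and FLOP savings come from.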
📝 Abstract
While modern segmentation models often prioritize performance over practicality, we advocate a design philosophy that puts simplicity and efficiency first while still pursuing high segmentation performance. This paper presents SimpleUNet, a scalable, ultra-lightweight medical image segmentation model with three key innovations: (1) a partial feature selection mechanism in skip connections that reduces redundancy while enhancing segmentation performance; (2) a fixed-width architecture that prevents exponential parameter growth across network stages; and (3) an adaptive feature fusion module that achieves stronger representations with minimal computational overhead. With a record-breaking 16 KB parameter configuration, SimpleUNet outperforms LBUNet and other lightweight benchmarks across multiple public datasets. The 0.67 MB variant achieves both high efficiency (8.60 GFLOPs) and high accuracy, attaining a mean DSC/IoU of 85.76%/75.60% on multi-center breast lesion datasets and surpassing both U-Net and TransUNet. Evaluations on skin lesion datasets (ISIC 2017/2018: mDice 84.86%/88.77%) and endoscopic polyp segmentation (KVASIR-SEG: 86.46%/76.48% mDice/mIoU) confirm consistent gains over state-of-the-art models. This work demonstrates that extreme model compression need not compromise performance, offering new insights for efficient and accurate medical image segmentation. Code is available at https://github.com/Frankyu5666666/SimpleUNet.
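The fixed-width design of innovation (2) can be made concrete with a back-of-envelope parameter count for a stack of 3x3 convolutions. The stage count and widths below are assumptions for illustration, not SimpleUNet's actual configuration:

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of a single k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def encoder_params(widths, c_in=1):
    """Total parameters for one conv per stage, chaining the widths.
    A single-channel (grayscale) input is assumed."""
    total = 0
    for c_out in widths:
        total += conv_params(c_in, c_out)
        c_in = c_out
    return total

doubling = [32, 64, 128, 256, 512]  # classic U-Net-style channel doubling
fixed = [32, 32, 32, 32, 32]        # fixed-width alternative

print(encoder_params(doubling))  # -> 1568000
print(encoder_params(fixed))     # -> 37312
```

With channel doubling, the deepest stage alone dominates the budget (256*512*9 weights), whereas the fixed-width stack grows only linearly with depth; in this toy setup the fixed-width encoder uses roughly 40x fewer parameters.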