Simple is what you need for efficient and accurate medical image segmentation

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical image segmentation models often sacrifice practicality and deployment efficiency in pursuit of higher accuracy. To address this, we propose SimpleUNet—a highly lightweight, scalable segmentation architecture designed for clinical deployment. Built upon the U-Net backbone, it supports end-to-end training and introduces three key innovations: (1) a partial feature selection mechanism within skip connections to suppress redundancy; (2) a fixed-width network design ensuring structural simplicity and predictability; and (3) a lightweight adaptive feature fusion module to enhance representational capacity. Experimental results demonstrate that a 16 KB configuration surpasses LBUNet, while the 0.67 MB variant achieves an mDSC of 85.76% on a breast lesion dataset. Moreover, it significantly outperforms both U-Net and TransUNet on skin lesion and polyp segmentation tasks. SimpleUNet thus achieves a balanced trade-off between segmentation accuracy and computational efficiency, enabling real-world clinical adoption.
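The adaptive feature fusion module above can be sketched as a softmax-weighted blend of two feature maps. The scalar logits and the pure-Python nested-list representation are illustrative assumptions for a minimal sketch, not the paper's actual module.

```python
import math

def adaptive_fuse(feat_a, feat_b, logit_a=0.0, logit_b=0.0):
    """Blend two equally-shaped feature maps (lists of rows) using
    softmax-normalized scalar weights; in a real network the logits
    would be learned parameters."""
    wa, wb = math.exp(logit_a), math.exp(logit_b)
    s = wa + wb
    wa, wb = wa / s, wb / s
    return [[wa * x + wb * y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(feat_a, feat_b)]

# Equal logits reduce to a plain average of the two maps.
fused = adaptive_fuse([[2.0]], [[4.0]])
```

Because only two scalars per fusion site are added, the representational gain comes at negligible parameter cost, which matches the "minimal computational overhead" claim.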

📝 Abstract
While modern segmentation models often prioritize performance over practicality, we advocate a design philosophy that prioritizes simplicity and efficiency, and pursue high-performance segmentation model design under that constraint. This paper presents SimpleUNet, a scalable ultra-lightweight medical image segmentation model with three key innovations: (1) A partial feature selection mechanism in skip connections that reduces redundancy while enhancing segmentation performance; (2) A fixed-width architecture that prevents exponential parameter growth across network stages; (3) An adaptive feature fusion module achieving enhanced representation with minimal computational overhead. With a record-breaking 16 KB parameter configuration, SimpleUNet outperforms LBUNet and other lightweight benchmarks across multiple public datasets. The 0.67 MB variant achieves superior efficiency (8.60 GFLOPs) and accuracy, attaining a mean DSC/IoU of 85.76%/75.60% on multi-center breast lesion datasets, surpassing both U-Net and TransUNet. Evaluations on skin lesion datasets (ISIC 2017/2018: mDice 84.86%/88.77%) and endoscopic polyp segmentation (KVASIR-SEG: 86.46%/76.48% mDice/mIoU) confirm consistent advantages over state-of-the-art models. This work demonstrates that extreme model compression need not compromise performance, providing new insights for efficient and accurate medical image segmentation. Codes can be found at https://github.com/Frankyu5666666/SimpleUNet.
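The fixed-width claim can be checked with back-of-envelope arithmetic. The stage count (5), base width (32), and one-convolution-per-stage layout below are assumptions for illustration, not the paper's exact architecture; the point is that channel doubling makes the deepest stages dominate the parameter budget.

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of a single k x k convolution layer."""
    return k * k * c_in * c_out + c_out

def encoder_params(widths, in_ch=1):
    """Total parameters of a chain of conv layers, one per stage."""
    total, prev = 0, in_ch
    for w in widths:
        total += conv_params(prev, w)
        prev = w
    return total

doubling = encoder_params([32, 64, 128, 256, 512])  # classic U-Net-style growth
fixed = encoder_params([32] * 5)                    # fixed-width design
```

Under these assumptions the doubling encoder needs roughly 42 times the parameters of the fixed-width one, which illustrates why holding the width constant is the key lever for kilobyte-scale models.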
Problem

Research questions and friction points this paper is trying to address.

Designing ultra-lightweight medical image segmentation models
Reducing redundancy while enhancing segmentation performance
Achieving high efficiency without compromising accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Partial feature selection in skip connections
Fixed-width architecture preventing parameter explosion
Adaptive feature fusion with low overhead
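The partial feature selection idea in the first bullet can be sketched as forwarding only a fraction of encoder channels through the skip connection before concatenation. Keeping the leading channels and the `keep_ratio` parameter are assumptions for this minimal sketch; SimpleUNet's actual selection rule may differ.

```python
def partial_skip(encoder_feats, decoder_feats, keep_ratio=0.5):
    """Pass only a fraction of encoder channels through the skip
    connection, then concatenate with the decoder channels.

    encoder_feats / decoder_feats: lists of 2-D channel maps
    (each map is a list of rows).
    """
    k = max(1, int(len(encoder_feats) * keep_ratio))
    selected = encoder_feats[:k]      # drop the presumed-redundant channels
    return selected + decoder_feats   # channel-wise concatenation

enc = [[[float(c)]] for c in range(8)]  # 8 one-pixel encoder channels
dec = [[[9.0]], [[9.0]]]                # 2 decoder channels
fused = partial_skip(enc, dec)
```

Halving the forwarded channels also halves the width of the convolution that consumes the concatenated tensor, so the saving compounds with the fixed-width design.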
Xiang Yu
Department of Ultrasound, Shenzhen Maternity and Child Healthcare Hospital, Women and Children’s Medical Center, Southern Medical University, Shenzhen, Guangdong Province, China.
Yayan Chen
Ultrasound Department of Longhua District Maternal and Child Healthcare Hospital, Shenzhen, China.
Guannan He
Peking University
Energy System · Mobility · Energy Storage · Optimization
Qing Zeng
Department of Ultrasound, Shenzhen Maternity and Child Healthcare Hospital, Women and Children’s Medical Center, Southern Medical University, Shenzhen, Guangdong Province, China.
Yue Qin
Department of Ultrasound, Shenzhen Maternity and Child Healthcare Hospital, Women and Children’s Medical Center, Southern Medical University, Shenzhen, Guangdong Province, China.
Meiling Liang
Department of Ultrasound, Shenzhen Maternity and Child Healthcare Hospital, Women and Children’s Medical Center, Southern Medical University, Shenzhen, Guangdong Province, China.
Dandan Luo
Department of Ultrasound, Shenzhen Maternity and Child Healthcare Hospital, Women and Children’s Medical Center, Southern Medical University, Shenzhen, Guangdong Province, China.
Yimei Liao
Department of Ultrasound, Shenzhen Maternity and Child Healthcare Hospital, Women and Children’s Medical Center, Southern Medical University, Shenzhen, Guangdong Province, China.
Zeyu Ren
College of Agronomy, Jilin Agricultural University, Changchun, China.
Cheng Kang
Department of Cybernetics and Robotics, Czech Technical University, Prague, Czech Republic.
Delong Yang
Institute for Engineering Medicine, Kunming Medical University, Kunming, China.
Bocheng Liang
Shenzhen Maternal and Child Health Centre, Southern Medical University
Prenatal Ultrasound · Prenatal Diagnosis · Artificial Intelligence · Medical Image Processing
Bin Pu
The Hong Kong University of Science and Technology | HNU | NTU
Computer Vision · Medical Image Analysis · Ultrasound Image Processing · AI4Science
Ying Yuan
Carnegie Mellon University
Robot learning
Shengli Li
Department of Ultrasound, Shenzhen Maternity and Child Healthcare Hospital, Women and Children’s Medical Center, Southern Medical University, Shenzhen, Guangdong Province, China.