🤖 AI Summary
State-of-the-art medical image segmentation models (e.g., nnUNet) struggle to balance accuracy and computational efficiency. Method: This paper introduces nnUZoo, an open-source benchmark framework that systematically evaluates CNN, Transformer, and Mamba architectures across six multimodal segmentation tasks (microscopy, ultrasound, CT, MRI, PET). It proposes the X2Net family, including the Mamba-based SS2D2Net, which integrates state-space modeling into a nested U2Net-style segmentation architecture. A fair, reproducible cross-architecture evaluation protocol is established, incorporating U2Net, UNETR, SwinUMamba, and others, with Dice score and computational cost as dual primary metrics. Contribution/Results: Mamba models achieve nnUNet/U2Net-level Dice scores while reducing parameters by 37%, demonstrating strong accuracy–efficiency potential, though at the cost of longer training times. CNNs retain overall practical advantages, whereas Transformers exhibit significant computational bottlenecks. nnUZoo provides a standardized, extensible platform for architecture evaluation in medical segmentation.
📝 Abstract
While numerous architectures for medical image segmentation have been proposed, matching the performance of state-of-the-art networks such as nnUNet remains challenging, leaving room for further innovation. In this work, we introduce nnUZoo, an open-source benchmarking framework built upon nnUNet, which incorporates various deep learning architectures, including CNNs, Transformers, and Mamba-based models. Using this framework, we provide a fair comparison to demystify performance claims across different medical image segmentation tasks. Additionally, to enrich the benchmarking, we explored five new architectures based on Mamba and Transformers, collectively named X2Net, and integrated them into nnUZoo for further evaluation. The proposed X2Net models combine the nested structure of conventional U2Net and the nnUNet pipeline with CNN, Transformer, and Mamba layers: UNETR2Net (UNETR), SwT2Net (SwinTransformer), SS2D2Net (SwinUMamba), Alt1DM2Net (LightUMamba), and MambaND2Net (MambaND). We extensively evaluate the performance of different models on six diverse medical image segmentation datasets, including microscopy, ultrasound, CT, MRI, and PET, covering various body parts, organs, and labels. We compare their performance, in terms of Dice score and computational efficiency, against their baseline models, U2Net, and nnUNet. CNN models like nnUNet and U2Net demonstrated both speed and accuracy, making them effective choices for medical image segmentation tasks. Transformer-based models, while promising for certain imaging modalities, exhibited high computational costs. The proposed Mamba-based X2Net architecture (SS2D2Net) achieved competitive accuracy, with no significant difference from nnUNet and U2Net, while using fewer parameters. However, it required significantly longer training time, highlighting a trade-off between model accuracy and computational cost.
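Since the Dice score is the primary accuracy metric throughout the benchmark, here is a minimal sketch of how it is typically computed for a pair of binary segmentation masks. The function name and smoothing term are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks.

    Illustrative helper (hypothetical), not the nnUZoo implementation.
    Dice = 2 * |pred ∩ target| / (|pred| + |target|); eps avoids 0/0
    when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: predicted mask covers two pixels, ground truth covers one of them
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(round(dice_score(pred, target), 3))  # 2*1 / (2+1) -> 0.667
```

In multi-class settings the score is usually computed per label and averaged, which is how segmentation benchmarks commonly report a single Dice value per dataset.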