🤖 AI Summary
Current 3D medical image segmentation backbones exhibit insufficient representational capacity under large-scale data regimes. To address this, we propose MedNeXt-v2, a scalable 3D ConvNeXt architecture featuring compound scaling across depth, width, and receptive field, together with a 3D Global Response Normalization (GRN) module. A key evaluation principle emerges: stronger from-scratch backbone performance reliably predicts stronger downstream performance after pretraining. The analysis further reveals that pathological segmentation is markedly more sensitive to representation scaling than anatomical segmentation, and that modality-specific pretraining yields no significant gain once full fine-tuning is applied. MedNeXt-v2 is pretrained with supervision on 18k CT volumes and integrated into nnUNet with full-parameter fine-tuning. It achieves state-of-the-art performance across six CT/MR benchmarks encompassing 144 anatomical structures, consistently outperforming seven publicly available pretrained models. Code and pretrained models are released in the official nnUNet repository.
📝 Abstract
Large-scale supervised pretraining is rapidly reshaping 3D medical image segmentation. However, existing efforts focus primarily on increasing dataset size and overlook whether the backbone network is an effective representation learner at scale. In this work, we address this gap by revisiting ConvNeXt-based architectures for volumetric segmentation and introducing MedNeXt-v2, a compound-scaled 3D ConvNeXt that leverages an improved micro-architecture and data scaling to deliver state-of-the-art performance. First, we show that the backbones routinely used in large-scale pretraining pipelines are often suboptimal. We then benchmark backbones comprehensively prior to scaling and demonstrate that stronger from-scratch performance reliably predicts stronger downstream performance after pretraining. Guided by these findings, we incorporate a 3D Global Response Normalization module and apply depth, width, and context scaling to improve our architecture for effective representation learning. We pretrain MedNeXt-v2 on 18k CT volumes and demonstrate state-of-the-art performance when fine-tuning across six challenging CT and MR benchmarks (144 structures), with consistent gains over seven publicly released pretrained models. Beyond these improvements, our benchmarking reveals that stronger backbones yield better results on similar data, that representation scaling disproportionately benefits pathological segmentation, and that modality-specific pretraining offers negligible benefit once full fine-tuning is applied. In conclusion, our results establish MedNeXt-v2 as a strong backbone for large-scale supervised representation learning in 3D medical image segmentation. Our code and pretrained models are made available with the official nnUNet repository at: https://www.github.com/MIC-DKFZ/nnUNet
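The 3D Global Response Normalization module mentioned in the abstract extends the GRN layer introduced in ConvNeXt-V2 from 2D to volumetric inputs by aggregating over all three spatial axes. As a rough illustration of that formulation (not the paper's actual implementation; the function and parameter names below are hypothetical, and a channels-last layout is assumed), a minimal NumPy sketch could look like:

```python
import numpy as np

def grn_3d(x, gamma, beta, eps=1e-6):
    """Sketch of a 3D Global Response Normalization (GRN) layer.

    Follows the ConvNeXt-V2 GRN formulation, extended to 3D by aggregating
    over all three spatial axes. `gamma` and `beta` stand in for learnable
    per-channel parameters (names are illustrative, not from the paper).

    x: array of shape (N, D, H, W, C), channels-last.
    """
    # Global feature aggregation: L2 norm over spatial axes -> (N, 1, 1, 1, C)
    gx = np.sqrt((x ** 2).sum(axis=(1, 2, 3), keepdims=True))
    # Divisive feature normalization across the channel dimension
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)
    # Per-channel calibration plus a residual connection back to the input
    return gamma * (x * nx) + beta + x
```

Note that with `gamma` and `beta` initialized to zero, the layer reduces to the identity at the start of training, which is how GRN is initialized in ConvNeXt-V2.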