🤖 AI Summary
Medical image segmentation faces key challenges including excessive model parameters, high computational cost, heavy reliance on large-scale annotated datasets, and dependence on pretraining. To address these, we propose LiteNeXt, a lightweight, end-to-end trainable architecture. Its minimalist encoder is built upon ConvMixer, paired with a streamlined decoder, yielding only 0.71M parameters and 0.42 GFLOPs. We introduce a novel Marginal Weight Loss to explicitly model ambiguous lesion boundaries, and propose a Self-embedding Representation Parallel mechanism for self-supervised augmentation without external data. Evaluated on six medical segmentation benchmarks, including Data Science Bowl, GlaS, and ISIC2018, LiteNeXt consistently outperforms state-of-the-art CNN- and Transformer-based models. It achieves a superior trade-off among segmentation accuracy, parameter efficiency, and computational cost, enabling effective training from scratch.
📝 Abstract
The emergence of deep learning techniques has advanced the image segmentation task, especially for medical images. Many neural network models have been introduced in the last decade, bringing automated segmentation accuracy close to that of manual segmentation. However, cutting-edge models like Transformer-based architectures rely on large-scale annotated training data and are generally designed with densely consecutive layers in the encoder, decoder, and skip connections, resulting in a large number of parameters. Additionally, for better performance, they are often pretrained on larger datasets, thus requiring large memory and increasing resource expenses. In this study, we propose a new lightweight but efficient model, namely LiteNeXt, based on convolutions and mixing modules with a simplified decoder, for medical image segmentation. The model is trained from scratch with a small number of parameters (0.71M) and low computational cost (0.42 GFLOPs). To handle fuzzy boundaries as well as occlusion or clutter in objects, especially in medical image regions, we propose the Marginal Weight Loss, which helps effectively determine the marginal boundary between object and background. Additionally, the Self-embedding Representation Parallel technique is proposed as an innovative data augmentation strategy that utilizes the network architecture itself for self-learning augmentation, enhancing feature extraction robustness without external data. Experiments on public datasets including Data Science Bowl, GlaS, ISIC2018, PH2, Sunnybrook, and Lung X-ray data show promising results compared to other state-of-the-art CNN-based and Transformer-based architectures. Our code is released at: https://github.com/tranngocduvnvp/LiteNeXt.
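The "convolutions and mixing modules" the encoder builds on follow the ConvMixer idea: alternate a depthwise (per-channel, spatial) convolution with a residual connection and a pointwise (1x1, channel-mixing) convolution. The sketch below is a minimal NumPy illustration of one such mixing step only; function names, shapes, and the omission of activation/normalization are our assumptions, not the authors' implementation.

```python
import numpy as np

def depthwise_conv(x, k):
    # x: (C, H, W); k: (C, kh, kw) -- one spatial filter per channel,
    # with "same" zero padding so the spatial size is preserved.
    C, H, W = x.shape
    kh, kw = k.shape[1:]
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i+kh, j:j+kw] * k[c])
    return out

def pointwise_conv(x, w):
    # w: (C_out, C_in) -- a 1x1 convolution mixing channels at each pixel.
    return np.tensordot(w, x, axes=([1], [0]))

def mixer_block(x, dw_k, pw_w):
    # Spatial mixing with a residual connection, then channel mixing,
    # as in ConvMixer-style blocks (activation/norm layers omitted here).
    x = x + depthwise_conv(x, dw_k)
    return pointwise_conv(x, pw_w)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))   # 8 channels, 16x16 feature map
dw_k = rng.standard_normal((8, 3, 3))  # 3x3 depthwise kernels
pw_w = rng.standard_normal((8, 8))     # 1x1 pointwise weights
y = mixer_block(x, dw_k, pw_w)
print(y.shape)  # (8, 16, 16)
```

Because the depthwise step touches only one channel at a time and the pointwise step only one pixel at a time, the block's parameter count stays small relative to a full dense convolution, which is the property a 0.71M-parameter encoder exploits.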