SegResMamba: An Efficient Architecture for 3D Medical Image Segmentation

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost, excessive GPU memory consumption, long training times, and substantial carbon footprint of Transformer-based models in 3D medical image segmentation, this work proposes integrating Structured State Space Models (SSMs) with deep residual connections to construct a lightweight 3D encoder-decoder architecture. Key components include channel-adaptive normalization, multi-scale feature recalibration, and progressive downsampling. Evaluated on the BTCV and Medical Segmentation Decathlon benchmarks, the method achieves Dice scores comparable to state-of-the-art baselines while reducing training GPU memory usage by over 50% and accelerating inference by 2.3×. This improves both the efficiency and the environmental sustainability of 3D medical segmentation models, supporting greener AI in clinical applications.

📝 Abstract
The Transformer architecture has opened a new paradigm in deep learning with its ability to model long-range dependencies and capture global context, and it has outpaced traditional Convolutional Neural Networks (CNNs) in many respects. However, applying Transformer models to 3D medical image datasets presents significant challenges due to their long training times and high memory requirements, which not only hinder scalability but also contribute to an elevated CO$_2$ footprint. This has led to an exploration of alternative models that can maintain or even improve performance while being more efficient and environmentally sustainable. Recent advancements in Structured State Space Models (SSMs) effectively address some of the inherent limitations of Transformers, particularly their high memory and computational demands. Inspired by these advancements, we propose an efficient 3D segmentation model for medical imaging called SegResMamba, designed to reduce computational complexity, memory usage, training time, and environmental impact while maintaining high performance. Our model uses less than half the memory during training compared to other state-of-the-art (SOTA) architectures, achieving comparable performance with significantly reduced resource demands.
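The efficiency argument above rests on the fact that SSMs process a sequence with a linear recurrence rather than all-pairs attention. As a minimal illustration (a toy single-state scan, not the paper's actual SegResMamba layers, whose parameters and structure are not specified here), the discretized recurrence can be sketched as:

```python
def ssm_scan(xs, a, b, c):
    """Run a toy 1-state discretized SSM over a 1D input sequence.

    h_t = a * h_{t-1} + b * x_t   (state update)
    y_t = c * h_t                 (readout)

    Each step costs O(1), so the whole sequence costs O(length),
    unlike self-attention, whose cost grows quadratically in length.
    """
    h = 0.0
    ys = []
    for x in xs:
        h = a * h + b * x  # carry a fixed-size state, no pairwise scores
        ys.append(c * h)
    return ys

# Example: with a = 1 - alpha, b = alpha, c = 1 this reduces to an
# exponential moving average of the input.
print(ssm_scan([1.0, 1.0, 1.0], a=0.5, b=0.5, c=1.0))  # → [0.5, 0.75, 0.875]
```

Real SSM layers (e.g. Mamba-style blocks) use learned, input-dependent matrices and parallel scan implementations, but the constant-size state is what keeps memory flat as sequence length grows.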
Problem

Research questions and friction points this paper is trying to address.

Addresses high memory and computational demands in 3D medical image segmentation.
Reduces training time and environmental impact of deep learning models.
Proposes SegResMamba for efficient, high-performance medical image analysis.
Innovation

Methods, ideas, or system contributions that make the work stand out.

SegResMamba reduces memory usage significantly.
Uses Structured State Space Models for efficiency.
Maintains high performance with lower resource demands.