I2I-Mamba: Multi-modal medical image synthesis via selective state space modeling

πŸ“… 2024-05-22
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 16
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
To address the challenge of balancing long-range contextual modeling and computational efficiency in multimodal medical image synthesis, this paper proposes the first lightweight adversarial synthesis framework based on selective state space models (SSMs). The core method introduces a channel-mixing Mamba (cmMamba) module, embedded within the bottleneck layer of a CNN backbone, to jointly model long-range dependencies across both spatial and channel dimensions. This work is the first to adapt SSMs to cross-modal medical image synthesis, achieving a favorable trade-off between global contextual awareness and local fidelity. Leveraging a convolutional adversarial architecture and a joint training paradigm on multi-contrast MRI and MRI–CT datasets, the framework achieves significant improvements over CNN- and Transformer-based baselines on MRI contrast completion and MRI-to-CT synthesis: +2.1 dB in PSNR, +0.032 in SSIM, and a 67% reduction in parameter count.

Technology Category

Application Category

πŸ“ Abstract
In recent years, deep learning models comprising transformer components have pushed the performance envelope in medical image synthesis tasks. Contrary to convolutional neural networks (CNNs) that use static, local filters, transformers use self-attention mechanisms to permit adaptive, non-local filtering to sensitively capture long-range context. However, this sensitivity comes at the expense of substantial model complexity, which can compromise learning efficacy particularly on relatively modest-sized imaging datasets. Here, we propose a novel adversarial model for multi-modal medical image synthesis, I2I-Mamba, that leverages selective state space modeling (SSM) to efficiently capture long-range context while maintaining local precision. To do this, I2I-Mamba injects channel-mixed Mamba (cmMamba) blocks in the bottleneck of a convolutional backbone. In cmMamba blocks, SSM layers are used to learn context across the spatial dimension and channel-mixing layers are used to learn context across the channel dimension of feature maps. Comprehensive demonstrations are reported for imputing missing images in multi-contrast MRI and MRI-CT protocols. Our results indicate that I2I-Mamba offers superior performance against state-of-the-art CNN- and transformer-based methods in synthesizing target-modality images.
Problem

Research questions and friction points this paper is trying to address.

Synthesize multi-modal medical images via selective state space modeling
Address long-range context sensitivity in image synthesis networks
Improve radial coverage and angular isotropy in contextual learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses dual-domain Mamba blocks for contextual modeling
Leverages spiral-scan SSM operators for enhanced coverage
Combines convolutional layers with SSM for spatial precision
πŸ”Ž Similar Papers
No similar papers found.