AI Summary
To address the challenge of balancing long-range contextual modeling and computational efficiency in multimodal medical image synthesis, this paper proposes the first lightweight adversarial synthesis framework based on selective state space models (SSMs). The core method introduces a channel-mixing Mamba (cmMamba) module, embedded within the bottleneck layer of a CNN backbone, to jointly model long-range dependencies across both spatial and channel dimensions. This work is the first to adapt SSMs to cross-modal medical image synthesis, achieving a favorable trade-off between global contextual awareness and local fidelity. Leveraging a convolutional adversarial architecture and a joint training paradigm on multi-contrast MRI and MRI-CT datasets, the framework achieves significant improvements over CNN- and Transformer-based baselines on MRI contrast completion and MRI-to-CT synthesis: +2.1 dB in PSNR, +0.032 in SSIM, and a 67% reduction in parameter count.
Abstract
In recent years, deep learning models comprising transformer components have pushed the performance envelope in medical image synthesis tasks. In contrast to convolutional neural networks (CNNs), which use static, local filters, transformers use self-attention mechanisms that permit adaptive, non-local filtering to sensitively capture long-range context. However, this sensitivity comes at the expense of substantial model complexity, which can compromise learning efficacy, particularly on relatively modest-sized imaging datasets. Here, we propose a novel adversarial model for multi-modal medical image synthesis, I2I-Mamba, that leverages selective state space modeling (SSM) to efficiently capture long-range context while maintaining local precision. To do this, I2I-Mamba injects channel-mixed Mamba (cmMamba) blocks in the bottleneck of a convolutional backbone. In cmMamba blocks, SSM layers are used to learn context across the spatial dimension, and channel-mixing layers are used to learn context across the channel dimension of feature maps. Comprehensive demonstrations are reported for imputing missing images in multi-contrast MRI and MRI-CT protocols. Our results indicate that I2I-Mamba offers superior performance against state-of-the-art CNN- and transformer-based methods in synthesizing target-modality images.
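To make the cmMamba idea concrete, the sketch below shows a toy NumPy version of a block that mixes context along two axes of a bottleneck feature map: a sequential state-space scan over flattened spatial positions, followed by a position-wise MLP across channels. This is an illustrative simplification under stated assumptions, not the paper's implementation: it uses a plain diagonal linear SSM (the input-dependent "selective" gating of Mamba is omitted), and all function and parameter names (`ssm_scan`, `channel_mix`, `cm_mamba_block`) are hypothetical.

```python
import numpy as np

def ssm_scan(x, a, b, c):
    # Diagonal linear state-space recurrence, applied per channel:
    #   h_t = a * h_{t-1} + b * x_t ,  y_t = c * h_t
    # x: (L, D) sequence of D-dim features; a, b, c: (D,) parameters.
    # NOTE: a stand-in for Mamba's selective SSM, which makes a, b, c
    # input-dependent; here they are fixed for simplicity.
    L, D = x.shape
    h = np.zeros(D)
    y = np.empty_like(x)
    for t in range(L):
        h = a * h + b * x[t]
        y[t] = c * h
    return y

def channel_mix(x, w1, w2):
    # Position-wise two-layer MLP mixing information across channels.
    return np.maximum(x @ w1, 0.0) @ w2

def cm_mamba_block(feat, params):
    # feat: (H, W, D) bottleneck feature map from a convolutional encoder.
    H, W, D = feat.shape
    tokens = feat.reshape(H * W, D)           # flatten spatial dims into a sequence
    y = tokens + ssm_scan(tokens, *params["ssm"])   # spatial context + residual
    y = y + channel_mix(y, *params["mix"])          # channel context + residual
    return y.reshape(H, W, D)
```

A usage sketch: with a `(4, 4, 8)` feature map and small random weights, `cm_mamba_block` returns a tensor of the same shape, so it can be dropped into a bottleneck between convolutional encoder and decoder stages without changing the surrounding architecture.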