🤖 AI Summary
Multi-modal brain tumor segmentation requires precise identification of internal subregions; however, existing prompt-based methods neglect cross-modal correlations and rely on manually designed, category-specific prompts, limiting generalizability and clinical applicability. To address this, we propose a generic prompt-driven memory-augmented framework. Our approach introduces a novel modality-and-slice memory attention (MSMA) mechanism to jointly model cross-modal and inter-slice dependencies; designs a multi-scale category-agnostic prompt encoder (MCP-Encoder) for robust guidance from arbitrary prompts (e.g., points or bounding boxes); and incorporates a modality-adaptive fusion decoder (MF-Decoder) to enhance collaborative feature representation. Evaluated on multiple MRI datasets, our method significantly outperforms state-of-the-art approaches, achieving top performance in both metastasis and glioma segmentation tasks. The source code is publicly available.
📝 Abstract
Multi-modal brain tumor segmentation is critical for clinical diagnosis, requiring accurate identification of distinct internal anatomical subregions. While recent prompt-based segmentation paradigms enable interactive experiences for clinicians, existing methods ignore cross-modal correlations and rely on labor-intensive category-specific prompts, limiting their applicability in real-world scenarios. To address these issues, we propose MSM-Seg, a framework for multi-modal brain tumor segmentation. MSM-Seg introduces a novel dual-memory segmentation paradigm that synergistically integrates multi-modal and inter-slice information with efficient category-agnostic prompts for brain tumor understanding. To this end, we first devise a modality-and-slice memory attention (MSMA) module to exploit the cross-modal and inter-slice relationships among the input scans. Then, we propose a multi-scale category-agnostic prompt encoder (MCP-Encoder) that provides tumor region guidance for decoding. Moreover, we devise a modality-adaptive fusion decoder (MF-Decoder) that leverages complementary decoding information across modalities to improve segmentation accuracy. Extensive experiments on different MRI datasets demonstrate that MSM-Seg outperforms state-of-the-art methods in multi-modal metastasis and glioma segmentation. The code is available at https://github.com/xq141839/MSM-Seg.
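The abstract does not specify the MSMA mechanism at the implementation level; as a rough illustration only, attending current-slice features over a joint memory of other-modality and neighboring-slice features could be sketched with scaled dot-product attention as below. All shapes, names, and the residual update are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_attention(query, modality_mem, slice_mem):
    """Toy sketch: attend current-slice features over a joint memory
    built from other modalities and adjacent slices.

    query:        (N, d) features of the current slice/modality
    modality_mem: (M, d) features from the other MRI modalities
    slice_mem:    (S, d) features from neighboring slices
    returns:      (N, d) memory-augmented features
    """
    memory = np.concatenate([modality_mem, slice_mem], axis=0)   # (M+S, d)
    d = query.shape[-1]
    attn = softmax(query @ memory.T / np.sqrt(d))                # (N, M+S)
    return query + attn @ memory                                 # residual update

# toy usage: 4 query tokens, 3 modality tokens, 2 slice tokens, dim 8
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
out = memory_attention(q,
                       rng.standard_normal((3, 8)),
                       rng.standard_normal((2, 8)))
print(out.shape)  # (4, 8)
```

In the paper's setting such a memory would be built per tumor slice from the co-registered MRI sequences, so the attention weights couple cross-modal and inter-slice context in a single lookup.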