MSM-Seg: A Modality-and-Slice Memory Framework with Category-Agnostic Prompting for Multi-Modal Brain Tumor Segmentation

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-modal brain tumor segmentation requires precise identification of internal subregions; however, existing prompt-based methods neglect cross-modal correlations and rely on manually designed, category-specific prompts, limiting generalizability and clinical applicability. To address this, the authors propose a generic prompt-driven, memory-augmented framework. The approach introduces a novel modality-and-slice memory attention (MSMA) mechanism to jointly model cross-modal and inter-slice dependencies; designs a multi-scale category-agnostic prompt encoder (MCP-Encoder) for robust guidance from arbitrary prompts (e.g., points or bounding boxes); and incorporates a modality-adaptive fusion decoder (MF-Decoder) to enhance collaborative feature representation. Evaluated on multiple MRI datasets, the method significantly outperforms state-of-the-art approaches, achieving top performance in both metastasis and glioma segmentation tasks. The source code is publicly available.

📝 Abstract
Multi-modal brain tumor segmentation is critical for clinical diagnosis, as it requires accurate identification of distinct internal anatomical subregions. While recent prompt-based segmentation paradigms enable interactive experiences for clinicians, existing methods ignore cross-modal correlations and rely on labor-intensive category-specific prompts, limiting their applicability in real-world scenarios. To address these issues, we propose MSM-Seg, a framework for multi-modal brain tumor segmentation. MSM-Seg introduces a novel dual-memory segmentation paradigm that synergistically integrates multi-modal and inter-slice information with efficient category-agnostic prompts for brain tumor understanding. To this end, we first devise a modality-and-slice memory attention (MSMA) to exploit the cross-modal and inter-slice relationships among the input scans. We then propose a multi-scale category-agnostic prompt encoder (MCP-Encoder) to provide tumor-region guidance for decoding. Moreover, we devise a modality-adaptive fusion decoder (MF-Decoder) that leverages complementary decoding information across different modalities to improve segmentation accuracy. Extensive experiments on different MRI datasets demonstrate that MSM-Seg outperforms state-of-the-art methods in multi-modal metastasis and glioma tumor segmentation. The code is available at https://github.com/xq141839/MSM-Seg.
Problem

Research questions and friction points this paper is trying to address.

Addresses the challenges of multi-modal brain tumor segmentation
Models the cross-modal correlations that existing methods neglect
Eliminates the need for labor-intensive category-specific prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-memory segmentation integrates multi-modal and inter-slice information
Multi-scale category-agnostic prompt encoder provides tumor guidance
Modality-adaptive fusion decoder leverages complementary cross-modal data
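The paper does not detail the MSMA computation here, but the core idea of attending current-slice features over a memory bank built from other modalities and adjacent slices can be sketched as follows. All names, shapes, and the single-head dot-product formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_attention(query_tokens, modality_memory, slice_memory):
    """Illustrative sketch: current-slice query tokens attend over a joint
    memory of cross-modal tokens and inter-slice tokens.

    query_tokens:    (N, d) features of the slice being segmented
    modality_memory: (Mm, d) tokens from the other MRI modalities (assumed)
    slice_memory:    (Ms, d) tokens from neighboring slices (assumed)
    """
    memory = np.concatenate([modality_memory, slice_memory], axis=0)  # (Mm+Ms, d)
    d = query_tokens.shape[-1]
    scores = query_tokens @ memory.T / np.sqrt(d)   # scaled dot-product scores
    weights = softmax(scores, axis=-1)              # attention over memory slots
    return weights @ memory                          # (N, d) memory-augmented features
```

In the actual framework this step would be embedded in the segmentation network and learned end-to-end; the sketch only shows how a single memory bank can expose both cross-modal and inter-slice context to the current slice.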
Yuxiang Luo
Graduate School of Information, Production and Systems, Waseda University, Japan
Qing Xu
School of Computer Science, University of Lincoln, UK; also with University of Nottingham, UK, and University of Nottingham Ningbo China, China
Hai Huang
College of Electrical Engineering and Information, Northeast Agricultural University, Harbin, China
Yuqi Ouyang
Sichuan University
Computer Vision
Zhen Chen
Yale University, New Haven, CT 06510, USA
Wenting Duan
University of Lincoln
computer vision, image processing, medical imaging