Towards Label-Free Brain Tumor Segmentation: Unsupervised Learning with Multimodal MRI

📅 2025-10-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the scarcity, high cost, and inter-annotator inconsistency of manual annotations in brain tumor segmentation, this paper proposes a fully unsupervised framework. Our method trains a multimodal vision transformer-based autoencoder exclusively on healthy brain MRI scans and localizes abnormalities via reconstruction error maps. We introduce a novel early–late multimodal fusion mechanism to enhance cross-sequence feature integration and, for the first time, incorporate the Segment Anything Model (SAM) as a contour-refinement post-processing module for unsupervised medical image segmentation. Crucially, no tumor annotations are required during training or inference. Evaluated on the BraTS-GoAT 2025 benchmark, our approach achieves an 89.4% lesion detection rate and a whole-tumor Dice coefficient of 0.437—marking substantial improvements in localization accuracy and clinical applicability over prior unsupervised methods. This work establishes a new paradigm for intelligent, annotation-free辅助 diagnosis in real-world clinical settings.

Technology Category

Application Category

📝 Abstract
Unsupervised anomaly detection (UAD) presents a complementary alternative to supervised learning for brain tumor segmentation in magnetic resonance imaging (MRI), particularly when annotated datasets are limited, costly, or inconsistent. In this work, we propose a novel Multimodal Vision Transformer Autoencoder (MViT-AE) trained exclusively on healthy brain MRIs to detect and localize tumors via reconstruction-based error maps. This unsupervised paradigm enables segmentation without reliance on manual labels, addressing a key scalability bottleneck in neuroimaging workflows. Our method is evaluated in the BraTS-GoAT 2025 Lighthouse dataset, which includes various types of tumors such as gliomas, meningiomas, and pediatric brain tumors. To enhance performance, we introduce a multimodal early-late fusion strategy that leverages complementary information across multiple MRI sequences, and a post-processing pipeline that integrates the Segment Anything Model (SAM) to refine predicted tumor contours. Despite the known challenges of UAD, particularly in detecting small or non-enhancing lesions, our method achieves clinically meaningful tumor localization, with lesion-wise Dice Similarity Coefficient of 0.437 (Whole Tumor), 0.316 (Tumor Core), and 0.350 (Enhancing Tumor) on the test set, and an anomaly Detection Rate of 89.4% on the validation set. These findings highlight the potential of transformer-based unsupervised models to serve as scalable, label-efficient tools for neuro-oncological imaging.
Problem

Research questions and friction points this paper is trying to address.

Enabling brain tumor segmentation without manual annotation requirements
Detecting diverse tumor types using unsupervised anomaly detection methods
Addressing scalability limitations in neuroimaging through label-free approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised Vision Transformer Autoencoder for tumor detection
Multimodal early-late fusion across MRI sequences
Segment Anything Model integration for contour refinement
🔎 Similar Papers
No similar papers found.