Learning Generalizable 3D Medical Image Representations from Mask-Guided Self-Supervision

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-supervised methods for 3D medical imaging struggle to capture anatomical semantics, limiting their transferability to downstream tasks. To address this, the authors propose MASS, an approach that uses mask-guided in-context segmentation as a self-supervised pretraining task. By leveraging automatically generated class-agnostic masks, MASS learns generalizable semantic representations from large-scale unlabeled 3D medical images (CT, MRI, and PET), integrating appearance, shape, and spatial relationships. In low-label regimes, the method outperforms self-supervised baselines by more than 20 Dice points, matches fully supervised models using only 20–40% of the annotated data, and, with a frozen encoder, attains comparable classification accuracy on unseen pathology classification tasks.

📝 Abstract
Foundation models have transformed vision and language by learning general-purpose representations from large-scale unlabeled data, yet 3D medical imaging lacks analogous approaches. Existing self-supervised methods rely on low-level reconstruction or contrastive objectives that fail to capture the anatomical semantics critical for medical image analysis, limiting transfer to downstream tasks. We present MASS (MAsk-guided Self-Supervised learning), which treats in-context segmentation as the pretext task for learning general-purpose medical imaging representations. MASS's key insight is that automatically generated class-agnostic masks provide sufficient structural supervision for learning semantically rich representations. By training on thousands of diverse mask proposals spanning anatomical structures and pathological findings, MASS learns what semantically defines medical structures: the holistic combination of appearance, shape, spatial context, and anatomical relationships. We demonstrate effectiveness across data regimes: from small-scale pretraining on individual datasets (20–200 scans) to large-scale multi-modal pretraining on 5K CT, MRI, and PET volumes, all without annotations. MASS demonstrates: (i) few-shot segmentation on novel structures, (ii) matching full supervision with only 20–40% labeled data while outperforming self-supervised baselines by over 20 Dice points in low-data regimes, and (iii) frozen-encoder classification on unseen pathologies that matches fully supervised training with thousands of samples. Mask-guided self-supervised pretraining captures broadly generalizable knowledge, opening a path toward 3D medical imaging foundation models without expert annotations. Code is available: https://github.com/Stanford-AIMI/MASS.
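The pretext objective described in the abstract can be sketched minimally: the model is asked to reproduce an automatically generated class-agnostic mask, scored with a soft Dice overlap. The snippet below is a hedged illustration under assumed names and shapes, not the authors' implementation; the real method predicts mask proposals from 3D context with a trained network, which is omitted here.

```python
import numpy as np

def soft_dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss over a 3D volume: 0 for perfect overlap, ~1 for none."""
    inter = float((pred * target).sum())
    denom = float(pred.sum() + target.sum())
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

# Toy pretext step: in MASS-style pretraining the network would predict a
# class-agnostic mask proposal from surrounding context; here we just score
# two hand-made predictions against a synthetic "structure" mask.
vol_shape = (8, 8, 8)
target = np.zeros(vol_shape)
target[2:6, 2:6, 2:6] = 1.0          # synthetic class-agnostic mask proposal
good_pred = target.copy()            # perfect reconstruction
bad_pred = 1.0 - target              # complete miss

print(round(soft_dice_loss(good_pred, target), 4))  # ≈ 0.0
print(round(soft_dice_loss(bad_pred, target), 4))   # ≈ 1.0
```

Because the masks are class-agnostic, the same loss applies uniformly to anatomical structures and pathological findings, which is what lets the pretraining scale without expert annotations.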
Problem

Research questions and friction points this paper is trying to address.

3D medical imaging
self-supervised learning
anatomical semantics
foundation models
generalizable representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

mask-guided self-supervision
3D medical foundation model
annotation-free representation learning
in-context segmentation
generalizable medical imaging