Self-Supervised Ultrasound-Video Segmentation with Feature Prediction and 3D Localised Loss

📅 2025-07-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Poor ultrasound image quality and high annotation costs pose dual challenges of data scarcity and label noise for supervised learning. To address this, we propose the first self-supervised segmentation framework specifically designed for ultrasound video, introducing V-JEPA—the first application of this vision foundation model to medical imaging—and innovatively designing a 3D localization auxiliary task to enhance spatiotemporal local modeling within Vision Transformers, avoiding both pixel-level reconstruction and negative-sample contrastive learning. Our method leverages only unlabeled ultrasound videos for pretraining and significantly improves few-shot segmentation performance: mDice increases by 8.35% with just 10% labeled data and by 3.4% under full supervision, outperforming existing self-supervised approaches. The core contribution lies in establishing a lightweight, computationally efficient self-supervised paradigm tailored to small-scale medical video datasets.

Technology Category

Application Category

📝 Abstract
Acquiring and annotating large datasets in ultrasound imaging is challenging due to low contrast, high noise, and susceptibility to artefacts. This process requires significant time and clinical expertise. Self-supervised learning (SSL) offers a promising solution by leveraging unlabelled data to learn useful representations, enabling improved segmentation performance when annotated data is limited. Recent state-of-the-art developments in SSL for video data include V-JEPA, a framework solely based on feature prediction, avoiding pixel level reconstruction or negative samples. We hypothesise that V-JEPA is well-suited to ultrasound imaging, as it is less sensitive to noisy pixel-level detail while effectively leveraging temporal information. To the best of our knowledge, this is the first study to adopt V-JEPA for ultrasound video data. Similar to other patch-based masking SSL techniques such as VideoMAE, V-JEPA is well-suited to ViT-based models. However, ViTs can underperform on small medical datasets due to lack of inductive biases, limited spatial locality and absence of hierarchical feature learning. To improve locality understanding, we propose a novel 3D localisation auxiliary task to improve locality in ViT representations during V-JEPA pre-training. Our results show V-JEPA with our auxiliary task improves segmentation performance significantly across various frozen encoder configurations, with gains up to 3.4% using 100% and up to 8.35% using only 10% of the training data.
Problem

Research questions and friction points this paper is trying to address.

Addresses ultrasound data annotation challenges via self-supervised learning
Improves ViT locality for ultrasound segmentation with 3D auxiliary tasks
Enhances segmentation performance using limited labeled ultrasound video data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning with feature prediction
3D localisation auxiliary task for ViTs
V-JEPA framework for ultrasound video segmentation
🔎 Similar Papers
2024-07-08International Conference on Medical Image Computing and Computer-Assisted InterventionCitations: 4