🤖 AI Summary
To address the poor image quality and scarce annotations that limit model performance and induce overfitting in ultrasound image segmentation, this work pioneers the integration of Neural Architecture Search (NAS) into the Vision Transformer framework. We propose a token-level multi-scale architecture search mechanism that automatically optimizes feature extraction, and further design a NAS-guided, staged semi-supervised learning framework that incorporates network-independence constraints and contrastive learning to enhance robustness under limited labeling. The method jointly models local anatomical detail and global contextual dependencies, effectively mitigating overfitting in data-scarce regimes. Evaluated on multiple public ultrasound benchmarks, our approach achieves state-of-the-art performance, surpassing fully supervised baselines while using only 10% of the labeled data, and demonstrates promising cross-modality transferability.
📝 Abstract
Accurate segmentation of ultrasound images is essential for reliable medical diagnoses but is challenged by poor image quality and scarce labeled data. Prior approaches have relied on manually designed, complex network architectures to improve multi-scale feature extraction. However, such handcrafted models offer limited gains when prior knowledge is inadequate and are prone to overfitting on small datasets. In this paper, we introduce DeNAS-ViT, a data-efficient Vision Transformer optimized via neural architecture search (NAS) and the first method to leverage NAS for ultrasound image segmentation, automatically optimizing the model architecture through token-level search. Specifically, we propose an efficient NAS module that performs multi-scale token search before the ViT's attention mechanism, capturing both contextual and local features while minimizing computational cost. Given the scarcity of ultrasound data and the inherent data demands of NAS, we further develop a NAS-guided semi-supervised learning (SSL) framework that integrates network independence and contrastive learning within a stage-wise optimization strategy, significantly enhancing model robustness under limited-data conditions. Extensive experiments on public datasets demonstrate that DeNAS-ViT achieves state-of-the-art performance and remains robust with minimal labeled data. Moreover, we highlight DeNAS-ViT's generalization potential beyond ultrasound imaging, underscoring its broader applicability.
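The abstract describes the token-level multi-scale search only at a high level. As an illustration (not the paper's actual implementation), the sketch below shows one common way such a search is relaxed into a differentiable form, DARTS-style: tokens are extracted at several candidate patch scales, pooled to a common grid, and mixed by a softmax over learnable architecture weights `alpha` before being handed to attention. All function names, the max-pool "embedding", and the toy 16x16 input are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def tokens_at_scale(img, p):
    """Split a square image into p x p patches and max-pool each patch into a
    scalar token (a toy stand-in for a learned linear patch embedding)."""
    g = img.shape[0] // p
    return img.reshape(g, p, g, p).max(axis=(1, 3))  # (g, g) token grid

def pool_to_grid(tok, g_out):
    """Average-pool a token grid down to a shared g_out x g_out grid so that
    candidate scales can be mixed token-wise."""
    f = tok.shape[0] // g_out
    return tok.reshape(g_out, f, g_out, f).mean(axis=(1, 3))

def multiscale_tokens(img, scales, alpha):
    """DARTS-style relaxation: the token sequence fed to attention is a
    softmax(alpha)-weighted mixture over the candidate patch scales."""
    g_out = img.shape[0] // max(scales)   # coarsest grid is the common one
    w = softmax(alpha)
    mixed = sum(wi * pool_to_grid(tokens_at_scale(img, p), g_out)
                for wi, p in zip(w, scales))
    return mixed.reshape(-1)              # flatten to a token sequence

img = rng.standard_normal((16, 16))
alpha = np.array([0.1, 0.5, -0.2])        # learnable architecture weights
toks = multiscale_tokens(img, [2, 4, 8], alpha)
print(toks.shape)                         # one mixed token per coarse cell
```

After the search phase, `alpha` would typically be discretized (argmax) to pick a single scale per position; here it simply parameterizes the soft mixture.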
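Likewise, the "network independence and contrastive learning" components of the SSL framework are not specified in the abstract. The minimal sketch below shows one plausible instantiation, under our own assumptions: an InfoNCE contrastive loss between embeddings produced by two networks for the same unlabeled samples, plus a cosine-based penalty that discourages the two networks' weights from aligning (so their errors stay decorrelated). This is an illustration of the general technique, not the paper's exact objective.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Row-wise L2 normalization of embedding vectors."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(za, zb, tau=0.1):
    """InfoNCE loss: row i of za and row i of zb are a positive pair
    (two views of the same sample); all other rows act as negatives."""
    za, zb = l2norm(za), l2norm(zb)
    logits = za @ zb.T / tau
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

def independence_penalty(wa, wb):
    """Squared cosine similarity between the two networks' flattened
    weights; minimizing it pushes the networks toward independence."""
    cos = np.dot(wa, wb) / (np.linalg.norm(wa) * np.linalg.norm(wb))
    return cos ** 2

# Toy usage: perfectly aligned pairs yield a lower loss than shuffled pairs.
za = np.eye(4)
print(info_nce(za, za) < info_nce(za, za[::-1]))
```

In a stage-wise scheme like the one described, terms of this kind would be weighted into the unsupervised part of the objective alongside the supervised segmentation loss.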