MedNNS: Supernet-based Medical Task-Adaptive Neural Network Search

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Joint optimization of model architecture selection and weight initialization remains challenging in medical imaging task adaptation. Method: This paper proposes Medical Neural Network Search (MedNNS), the first neural network search framework tailored for medical imaging. It constructs a supernetwork-based meta-space that encodes datasets and models by how well they perform together, enabling joint selection of architecture and initialization. Rank loss and Fréchet Inception Distance (FID) loss are introduced during meta-space construction to capture inter-model and inter-dataset relationships for more accurate alignment. The resulting model zoo is 51× larger than in previous state-of-the-art methods. Contribution/Results: Evaluated across multiple medical imaging benchmarks, the framework achieves an average accuracy gain of 1.7% over prior approaches, converges substantially faster, and outperforms both ImageNet-based transfer learning and existing NAS methods, demonstrating superior generalization and efficiency in medical domain adaptation.

📝 Abstract
Deep learning (DL) has achieved remarkable progress in the field of medical imaging. However, adapting DL models to medical tasks remains a significant challenge, primarily due to two key factors: (1) architecture selection, as different tasks necessitate specialized model designs, and (2) weight initialization, which directly impacts the convergence speed and final performance of the models. Although transfer learning from ImageNet is a widely adopted strategy, its effectiveness is constrained by the substantial differences between natural and medical images. To address these challenges, we introduce Medical Neural Network Search (MedNNS), the first Neural Network Search framework for medical imaging applications. MedNNS jointly optimizes architecture selection and weight initialization by constructing a meta-space that encodes datasets and models based on how well they perform together. We build this space using a Supernetwork-based approach, expanding the model zoo size by 51× over previous state-of-the-art (SOTA) methods. Moreover, we introduce rank loss and Fréchet Inception Distance (FID) loss into the construction of the space to capture inter-model and inter-dataset relationships, thereby achieving more accurate alignment in the meta-space. Experimental results across multiple datasets demonstrate that MedNNS significantly outperforms both ImageNet pre-trained DL models and SOTA Neural Architecture Search (NAS) methods, achieving an average accuracy improvement of 1.7% across datasets while converging substantially faster. The code and the processed meta-space are available at https://github.com/BioMedIA-MBZUAI/MedNNS.
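The abstract mentions an FID loss for aligning dataset representations in the meta-space. The paper's exact formulation is not reproduced here, but the standard Fréchet distance between two Gaussians fitted to feature statistics can be sketched as follows (the function names `dataset_stats` and `fid` are illustrative, not taken from the MedNNS codebase):

```python
import numpy as np
from scipy import linalg

def dataset_stats(features):
    """Fit a Gaussian (mean, covariance) to a (n_samples, dim) feature array."""
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False)
    return mu, sigma

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

In a meta-space setting, such a distance between the feature statistics of two datasets could serve as a target for how far apart their embeddings should lie; how MedNNS actually wires this into its loss is described in the paper itself.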
Problem

Research questions and friction points this paper is trying to address.

Adapting DL models to diverse medical imaging tasks
Optimizing architecture selection and weight initialization jointly
Addressing domain gap between natural and medical images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Supernetwork-based joint architecture and weight optimization
Meta-space encoding with rank and FID loss
51× larger model zoo than previous SOTA methods
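The rank loss above is named only at a high level. A common pairwise, margin-based ranking formulation, which may differ from the paper's actual loss, penalizes model pairs whose predicted ordering in the meta-space disagrees with their observed performance ordering:

```python
import numpy as np

def pairwise_rank_loss(pred, target, margin=0.1):
    """Hinge-style pairwise ranking loss (illustrative sketch).

    For every pair (i, j) where target[i] > target[j], the prediction
    pred[i] should exceed pred[j] by at least `margin`; shortfalls are
    averaged over all such ordered pairs.
    """
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    total, count = 0.0, 0
    for i in range(len(pred)):
        for j in range(len(pred)):
            if target[i] > target[j]:
                total += max(0.0, margin - (pred[i] - pred[j]))
                count += 1
    return total / max(count, 1)
```

A loss of zero means every correctly ordered pair is separated by at least the margin; reversed predictions are penalized in proportion to how badly the ordering is violated.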