Fit Pixels, Get Labels: Meta-learned Implicit Networks for Image Segmentation

๐Ÿ“… 2025-10-05
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Implicit Neural Representations (INRs) are inherently ill-suited for medical image segmentation due to their difficulty in capturing semantic structures. To address this, we propose MetaSegโ€”the first lightweight segmentation framework that deeply integrates meta-learning with INRs. MetaSeg employs a unified implicit network that jointly regresses pixel intensities and semantic labels, enabling end-to-end segmentation decoding via fine-tuning in just a few optimization steps. Its core innovation lies in meta-optimization to learn a task-agnostic initialization, allowing rapid adaptation to new images with minimal supervision. Evaluated on 2D and 3D brain MRI datasets, MetaSeg achieves Dice scores comparable to U-Net while reducing model parameters by 90% (retaining only 10% of U-Netโ€™s parameter count), significantly improving computational efficiency and scalability. This work establishes a novel, efficient paradigm for medical image segmentation.

Technology Category

Application Category

๐Ÿ“ Abstract
Implicit neural representations (INRs) have achieved remarkable successes in learning expressive yet compact signal representations. However, they are not naturally amenable to predictive tasks such as segmentation, where they must learn semantic structures over a distribution of signals. In this study, we introduce MetaSeg, a meta-learning framework to train INRs for medical image segmentation. MetaSeg uses an underlying INR that simultaneously predicts per pixel intensity values and class labels. It then uses a meta-learning procedure to find optimal initial parameters for this INR over a training dataset of images and segmentation maps, such that the INR can simply be fine-tuned to fit pixels of an unseen test image, and automatically decode its class labels. We evaluated MetaSeg on 2D and 3D brain MRI segmentation tasks and report Dice scores comparable to commonly used U-Net models, but with $90%$ fewer parameters. MetaSeg offers a fresh, scalable alternative to traditional resource-heavy architectures such as U-Nets and vision transformers for medical image segmentation. Our project is available at https://kushalvyas.github.io/metaseg.html .
Problem

Research questions and friction points this paper is trying to address.

Meta-learned implicit networks for medical image segmentation
Training INRs to predict pixel intensities and class labels
Finding optimal initial parameters for fast fine-tuning on unseen images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learned implicit networks for segmentation
Simultaneously predicts pixel intensity and labels
Fine-tunes with 90% fewer parameters than U-Net
๐Ÿ”Ž Similar Papers
No similar papers found.