Developing Foundation Models for Universal Segmentation from 3D Whole-Body Positron Emission Tomography

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenges of low anatomical contrast and high annotation costs in 3D whole-body PET imaging, which hinder the application of deep learning–based segmentation. To overcome these limitations, the authors construct the largest multi-center, multi-tracer 3D whole-body PET dataset to date and propose SegAnyPET—the first general-purpose foundation model tailored for this imaging modality. Built upon a 3D architecture with a prompt-driven mechanism, SegAnyPET enables zero-shot cross-task segmentation and facilitates efficient human-in-the-loop interaction. Experimental results demonstrate that SegAnyPET exhibits strong generalization across diverse disease scenarios, significantly enhancing the clinical applicability of molecular imaging.

📝 Abstract
Positron emission tomography (PET) is a key nuclear medicine imaging modality that visualizes radiotracer distributions to quantify in vivo physiological and metabolic processes, playing an irreplaceable role in disease management. Despite its clinical importance, the development of deep learning models for quantitative PET image analysis remains severely limited, owing both to the inherent segmentation challenge posed by PET's paucity of anatomical contrast and to the high costs of data acquisition and annotation. To bridge this gap, we develop generalist foundation models for universal segmentation from 3D whole-body PET imaging. We first build the largest and most comprehensive PET dataset to date, comprising 11,041 3D whole-body PET scans with 59,831 segmentation masks for model development. Based on this dataset, we present SegAnyPET, an innovative foundation model with general-purpose applicability to diverse segmentation tasks. Built on a 3D architecture with a prompt engineering strategy for mask generation, SegAnyPET enables universal and scalable organ and lesion segmentation, supports efficient human correction with minimal effort, and integrates into a clinical human-in-the-loop workflow. Extensive evaluations on multi-center, multi-tracer, multi-disease datasets demonstrate that SegAnyPET achieves strong zero-shot performance across a wide range of segmentation tasks, highlighting its potential to advance the clinical applications of molecular imaging.
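The prompt-driven mask-generation idea described in the abstract can be illustrated with a deliberately simplified sketch. This is not SegAnyPET's actual method; it is a toy stand-in in which a hypothetical `segment_from_point_prompt` helper flood-fills a 3D volume from a user-clicked seed voxel, mimicking how a point prompt seeds an organ or lesion mask that a clinician could then correct.

```python
import numpy as np
from collections import deque

def segment_from_point_prompt(volume, seed, tol=0.2):
    """Toy stand-in for prompt-driven segmentation: grow a mask from a
    user-clicked seed voxel to 6-connected neighbors whose intensity is
    within `tol` of the seed intensity. Real foundation models predict
    the mask with a neural network; the prompt interface is analogous."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed = tuple(seed)
    ref = volume[seed]                     # reference uptake at the click
    queue = deque([seed])
    mask[seed] = True
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbors:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(volume[n] - ref) <= tol:
                    mask[n] = True
                    queue.append(n)
    return mask

# Synthetic "hot" uptake region inside a cold background volume.
vol = np.zeros((16, 16, 16), dtype=np.float32)
vol[4:8, 4:8, 4:8] = 1.0                   # 4x4x4 high-uptake block
mask = segment_from_point_prompt(vol, (5, 5, 5), tol=0.1)
print(int(mask.sum()))                     # prints 64 (the full block)
```

The usage pattern, click a voxel and receive a 3D mask, is the part that carries over to the human-in-the-loop workflow the paper describes; the segmentation logic itself is purely illustrative.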
Problem

Research questions and friction points this paper is trying to address.

PET segmentation
foundation models
anatomical contrast
data annotation
universal segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

foundation model
universal segmentation
3D PET imaging
prompt engineering
zero-shot learning
Yichi Zhang
Fudan University
Medical Image Analysis, Foundation Models, AI4Medicine
Le Xue
Fudan University
AI for Medical Imaging
Wenbo Zhang
Shanghai Ocean University
Theoretical Computer Science
Lanlan Li
Human Phenome Institute, Fudan University, Shanghai, China.
Feiyang Xiao
Group of Intelligent Signal Processing (GISP), Harbin Engineering University
Detection and Classification of Acoustic Scenes and Events, Audio-Text Multi-Modality Learning
Yuchen Liu
Fudan University
AI For Science, Machine Learning, Biomedical FM Model
Xiaohui Zhang
Department of Nuclear Medicine, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine, Chinese Academy of Sciences, Hangzhou, China.
Hongwei Zhang
Fudan University
Graphs, AI4S, Machine Learning
Shuqi Wang
Human Phenome Institute, Fudan University, Shanghai, China.
Gang Feng
Shanghai Universal Medical Imaging Diagnostic Center, Shanghai, China.
Liling Peng
Shanghai Universal Medical Imaging Diagnostic Center, Shanghai, China.
Xin Gao
Shanghai AI Laboratory & SJTU
ML, NLP, LLM
Yuanfan Xu
Tsinghua University
robotic computing, multi-agent systems, embodied AI, domain-specific accelerators
Yuan Qi
Artificial Intelligence Innovation and Incubation Institute, Fudan University, Shanghai, China.
Kuangyu Shi
University of Bern/Technical University of Munich
Nuclear Medicine, Biomedical Computing
Hong Zhang
Department of Nuclear Medicine, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine, Chinese Academy of Sciences, Hangzhou, China.
Yuan Cheng
Artificial Intelligence Innovation and Incubation Institute, Fudan University, Shanghai, China.
Mei Tian
Shanghai Academy of Artificial Intelligence for Science, Shanghai, China.
Zixin Hu
Associate Professor, Fudan University