PETAR: Localized Findings Generation with Mask-Aware Vision-Language Modeling for PET Automated Reporting

📅 2025-10-31
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the cross-modal understanding challenges of automatic radiology report generation from 3D PET/CT volumes (large volumetric data, sparse and subtle lesions, and lengthy textual reports), this work proposes PETAR-4B, the first 3D mask-aware vision-language model with lesion-level spatial alignment. PETAR-4B jointly encodes PET, CT, and lesion segmentation masks, combining fine-grained lesion perception with holistic contextual modeling. The authors construct a large-scale, lesion-level annotated dataset of over 11,000 lesion descriptions from more than 5,000 PET/CT exams, using a hybrid rule-based and LLM-assisted pipeline to generate high-quality image–text pairs. Both quantitative evaluation and clinical expert assessment demonstrate significant improvements in report accuracy, spatial localization precision, and clinical consistency. To the authors' knowledge, this is the first work to achieve "see-as-reported" lesion localization in 3D PET/CT imaging, advancing medical vision-language understanding toward 3D-aware, interpretable, and clinically deployable systems.

📝 Abstract
Recent advances in vision-language models (VLMs) have enabled impressive multimodal reasoning, yet most medical applications remain limited to 2D imaging. In this work, we extend VLMs to 3D positron emission tomography and computed tomography (PET/CT), a domain characterized by large volumetric data, small and dispersed lesions, and lengthy radiology reports. We introduce a large-scale dataset comprising over 11,000 lesion-level descriptions paired with 3D segmentations from more than 5,000 PET/CT exams, extracted via a hybrid rule-based and large language model (LLM) pipeline. Building upon this dataset, we propose PETAR-4B, a 3D mask-aware vision-language model that integrates PET, CT, and lesion contours for spatially grounded report generation. PETAR bridges global contextual reasoning with fine-grained lesion awareness, producing clinically coherent and localized findings. Comprehensive automated and human evaluations demonstrate that PETAR substantially improves PET/CT report generation quality, advancing 3D medical vision-language understanding.
Problem

Research questions and friction points this paper is trying to address.

Extending vision-language models to 3D PET/CT medical imaging
Addressing challenges of large volumetric data and dispersed lesions
Generating clinically coherent localized findings for automated reporting
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D mask-aware vision-language model for PET/CT
Integrates PET, CT, and lesion contour data
Generates spatially grounded radiology reports
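The page describes the model only at a high level (jointly encoding PET, CT, and lesion masks). As a purely illustrative sketch, not the authors' implementation, the channel-stacking idea behind a mask-aware 3D input could look like the following (all function and variable names are assumed):

```python
import numpy as np

def assemble_mask_aware_input(pet, ct, lesion_mask):
    """Stack co-registered PET, CT, and binary lesion-mask volumes into a
    single multi-channel 3D array of shape (3, D, H, W), the kind of input
    a mask-aware 3D encoder could consume. Normalization choice is
    illustrative, not taken from the paper."""
    assert pet.shape == ct.shape == lesion_mask.shape, "volumes must be co-registered"
    # Per-volume intensity standardization (zero mean, unit variance).
    pet_n = (pet - pet.mean()) / (pet.std() + 1e-6)
    ct_n = (ct - ct.mean()) / (ct.std() + 1e-6)
    # The mask enters as its own channel so lesion locations stay explicit.
    return np.stack([pet_n, ct_n, lesion_mask.astype(np.float32)], axis=0)

# Toy example with a small random volume.
shape = (16, 64, 64)  # (depth, height, width)
x = assemble_mask_aware_input(
    np.random.rand(*shape).astype(np.float32),
    np.random.rand(*shape).astype(np.float32),
    np.random.rand(*shape) > 0.95,  # sparse binary "lesion" mask
)
print(x.shape)  # (3, 16, 64, 64)
```

Keeping the segmentation mask as an explicit input channel is one simple way to give a volumetric encoder lesion-level spatial grounding; the actual PETAR-4B fusion mechanism is not detailed on this page.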
👥 Authors
Danyal Maqbool
University of Wisconsin–Madison, Madison, WI, USA
Changhee Lee
Assistant Professor, Korea University (Machine Learning, Deep Learning, AI in Medicine)
Zachary Huemann
University of Wisconsin–Madison, Madison, WI, USA
Samuel D. Church
University of Wisconsin–Madison, Madison, WI, USA
Matthew E. Larson
University of Wisconsin–Madison, Madison, WI, USA
Scott B. Perlman
University of Wisconsin–Madison, Madison, WI, USA
Tomas A. Romero
University of Wisconsin–Madison, Madison, WI, USA
Joshua D. Warner
University of Wisconsin–Madison, Madison, WI, USA
Meghan Lubner
University of Wisconsin–Madison, Madison, WI, USA
Xin Tie
University of Wisconsin–Madison, Madison, WI, USA
Jameson Merkow
PhD from University of California San Diego (Machine Learning, Computer Vision, Image Denoising, Object Detection, Signal Analysis)
Junjie Hu
University of Wisconsin–Madison, Madison, WI, USA
Steve Y. Cho
University of Wisconsin–Madison, Madison, WI, USA
Tyler J. Bradshaw
Associate Professor, University of Wisconsin–Madison (Machine Learning, Nuclear Medicine, Large Language Models, Multimodal Vision-Language)