🤖 AI Summary
To address cross-modal understanding challenges in automatic radiology report generation from 3D PET/CT volumes, which stem from large-scale data, sparse and subtle lesions, and lengthy textual reports, this work proposes PETAR-4B, the first lesion-level spatially aligned 3D mask-aware vision-language model. PETAR-4B jointly encodes PET, CT, and lesion segmentation masks to enable fine-grained lesion perception and holistic contextual modeling. We construct a large-scale, lesion-level annotated dataset comprising over 10,000 cases, leveraging hybrid rule-based and LLM-assisted methods to generate high-quality image-text pairs. Quantitative evaluation and clinical expert assessment both demonstrate significant improvements in report accuracy, spatial localization precision, and clinical consistency. To our knowledge, this is the first work achieving "see-as-reported" precise lesion localization in 3D PET/CT imaging, advancing medical vision-language understanding toward 3D-aware, interpretable, and clinically deployable systems.
📄 Abstract
Recent advances in vision-language models (VLMs) have enabled impressive multimodal reasoning, yet most medical applications remain limited to 2D imaging. In this work, we extend VLMs to 3D positron emission tomography and computed tomography (PET/CT), a domain characterized by large volumetric data, small and dispersed lesions, and lengthy radiology reports. We introduce a large-scale dataset comprising over 11,000 lesion-level descriptions paired with 3D segmentations from more than 5,000 PET/CT exams, extracted via a hybrid rule-based and large language model (LLM) pipeline. Building upon this dataset, we propose PETAR-4B, a 3D mask-aware vision-language model that integrates PET, CT, and lesion contours for spatially grounded report generation. PETAR bridges global contextual reasoning with fine-grained lesion awareness, producing clinically coherent and localized findings. Comprehensive automated and human evaluations demonstrate that PETAR substantially improves PET/CT report generation quality, advancing 3D medical vision-language understanding.