🤖 AI Summary
This study addresses a critical limitation of current multimodal large language models (MLLMs) in functional imaging, particularly positron emission tomography (PET): inadequate perceptual capabilities prevent these models from disentangling tracer distribution from anatomical structure, often leading to diagnostic hallucinations. To bridge this gap, the work presents the first systematic characterization and quantification of this perceptual deficit and introduces PET-Bench, the first large-scale, multi-center, multi-tracer benchmark, comprising 52,308 hierarchically structured question-answer pairs. The authors further propose the Atomic Visual Alignment (AVA) training paradigm, which aligns chain-of-thought reasoning with visual evidence by using low-level functional perception to guide high-level inference. Experiments show that AVA improves diagnostic accuracy by up to 14.83% while effectively mitigating hallucinations, advancing safe and reliable MLLM-based understanding and reasoning in functional medical imaging.
📝 Abstract
While Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in tasks such as abnormality detection and report generation for anatomical modalities, their capability in functional imaging remains largely unexplored. In this work, we identify and quantify a fundamental functional perception gap: the inability of current vision encoders to decode functional tracer biodistribution independently of morphological priors. Identifying Positron Emission Tomography (PET) as the quintessential modality for investigating this disconnect, we introduce PET-Bench, the first large-scale functional imaging benchmark, comprising 52,308 hierarchical QA pairs from 9,732 multi-site, multi-tracer PET studies. Extensive evaluation of 19 state-of-the-art MLLMs reveals a critical safety hazard we term the Chain-of-Thought (CoT) hallucination trap: standard CoT prompting, widely considered to enhance reasoning, paradoxically decouples linguistic generation from visual evidence in PET, producing clinically fluent but factually ungrounded diagnoses. To resolve this, we propose Atomic Visual Alignment (AVA), a simple fine-tuning strategy that enforces mastery of low-level functional perception before high-level diagnostic reasoning. Our results demonstrate that AVA effectively bridges the perception gap, transforming CoT from a source of hallucination into a robust inference tool and improving diagnostic accuracy by up to 14.83%. Code and data are available at https://github.com/yezanting/PET-Bench.