Unveiling and Bridging the Functional Perception Gap in MLLMs: Atomic Visual Alignment and Hierarchical Evaluation via PET-Bench

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the critical limitation of current multimodal large language models (MLLMs) in functional imaging—particularly positron emission tomography (PET)—where inadequate perceptual capabilities hinder the disentanglement of tracer distribution from anatomical structures, often leading to diagnostic hallucinations. To bridge this gap, the work presents the first systematic characterization and quantification of this perceptual deficit and introduces PET-Bench, the first large-scale, multi-center, multi-tracer benchmark comprising 52,308 hierarchically structured question-answer pairs. Furthermore, the authors propose the Atomic Visual Alignment (AVA) training paradigm, which aligns chain-of-thought reasoning with visual evidence by leveraging low-level functional perception to guide high-level inference. Experimental results demonstrate that AVA substantially improves diagnostic accuracy—by up to 14.83%—while effectively mitigating hallucinations, thereby advancing safe and reliable MLLM-based understanding and reasoning in functional medical imaging.

📝 Abstract
While Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in tasks such as abnormality detection and report generation for anatomical modalities, their capability in functional imaging remains largely unexplored. In this work, we identify and quantify a fundamental functional perception gap: the inability of current vision encoders to decode functional tracer biodistribution independent of morphological priors. Identifying Positron Emission Tomography (PET) as the quintessential modality to investigate this disconnect, we introduce PET-Bench, the first large-scale functional imaging benchmark comprising 52,308 hierarchical QA pairs from 9,732 multi-site, multi-tracer PET studies. Extensive evaluation of 19 state-of-the-art MLLMs reveals a critical safety hazard termed the Chain-of-Thought (CoT) hallucination trap. We observe that standard CoT prompting, widely considered to enhance reasoning, paradoxically decouples linguistic generation from visual evidence in PET, producing clinically fluent but factually ungrounded diagnoses. To resolve this, we propose Atomic Visual Alignment (AVA), a simple fine-tuning strategy that enforces the mastery of low-level functional perception prior to high-level diagnostic reasoning. Our results demonstrate that AVA effectively bridges the perception gap, transforming CoT from a source of hallucination into a robust inference tool and improving diagnostic accuracy by up to 14.83%. Code and data are available at https://github.com/yezanting/PET-Bench.
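The abstract describes AVA as a staged strategy that enforces mastery of low-level functional perception before high-level diagnostic reasoning. A minimal sketch of that scheduling idea, assuming an illustrative two-level split of QA pairs (the `QAPair` fields and level names are hypothetical, not PET-Bench's actual schema):

```python
# Minimal sketch of a two-stage curriculum in the spirit of Atomic Visual
# Alignment (AVA): low-level perception QA is scheduled before high-level
# diagnostic QA. Field names and level labels are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class QAPair:
    question: str
    answer: str
    level: str  # "perception" (low-level) or "diagnosis" (high-level)


# Stage order reflecting AVA's core idea: perception first, then diagnosis.
STAGE_ORDER = {"perception": 0, "diagnosis": 1}


def curriculum_schedule(pairs):
    """Sort QA pairs so perception-level items precede diagnostic items,
    preserving the original order within each stage (sorted() is stable)."""
    return sorted(pairs, key=lambda p: STAGE_ORDER[p.level])


pairs = [
    QAPair("What is the likely diagnosis?", "Lymphoma", "diagnosis"),
    QAPair("Which organ shows the highest tracer uptake?", "Liver", "perception"),
    QAPair("Is the uptake physiological or pathological?", "Pathological", "perception"),
]

schedule = curriculum_schedule(pairs)
print([p.level for p in schedule])  # perception items first, then diagnosis
```

This only illustrates the ordering constraint; the paper's actual fine-tuning recipe is not specified in the abstract.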
Problem

Research questions and friction points this paper is trying to address.

functional perception gap
multimodal large language models
PET imaging
visual hallucination
diagnostic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Functional Perception Gap
Atomic Visual Alignment
PET-Bench
Chain-of-Thought Hallucination
Multimodal Large Language Models
Zanting Ye
Southern Medical University
Deep learning · Medical image analysis · VLM
Xiaolong Niu
School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
Xuanbin Wu
School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
Xu Han
School of Biomedical Engineering, Shanghai Jiaotong University, Shanghai, 200240, China
Shengyuan Liu
The Chinese University of Hong Kong; CASIA
Multimodal Learning · Generative models · AI for Healthcare · Radiomics
Jing Hao
Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China
Zhihao Peng
Department of Electronic Engineering, Chinese University of Hong Kong, Hong Kong 518172, China
Hao Sun
Southern Medical University
Biomedical Engineering
Jieqin Lv
Department of Nuclear Medicine, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou 510641, China
Fanghu Wang
PET Center, Department of Nuclear Medicine, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
Yanchao Huang
Department of Nuclear Medicine, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
Hubing Wu
Department of Nuclear Medicine, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
Yixuan Yuan
Associate Professor, The Chinese University of Hong Kong
Medical image analysis · AI in healthcare · Brain data analysis · Endoscopy
Habib Zaidi
Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospitals, CH-1211 Geneva, Switzerland
Arman Rahmim
Professor of Radiology, Physics and Biomedical Engineering, University of British Columbia
Computational imaging · Molecular imaging · Personalized cancer therapy · AI · Theranostics
Yefeng Zheng
Professor, Westlake University, Hangzhou, China, IEEE Fellow, AIMBE Fellow
AI in Health · Medical Imaging · Computer Vision · Natural Language Processing · Large Language Model
Lijun Lu
School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China