AI Summary
This work addresses diagnostic errors and hallucinations in clinical oncology analysis, which often arise from the absence of traceable, multimodal reasoning mechanisms. To this end, we propose TumorChain, a multimodal interleaved reasoning framework, together with TumorCoT, the first large-scale multimodal chain-of-thought reasoning benchmark for oncology. By integrating 3D medical imaging, clinical text, and organ-level vision-language alignment through interleaved causal reasoning, TumorChain enables interpretable and traceable analysis from radiological findings to pathological predictions. The method combines 3D image encoding, clinical semantic understanding, and self-optimized multi-turn reasoning, and significantly outperforms strong baselines in lesion detection, clinical impression generation, and pathological classification. It also demonstrates strong generalization on the DeepTumorVQA benchmark.
Abstract
Accurate tumor analysis is central to clinical radiology and precision oncology, where early detection, reliable lesion characterization, and pathology-level risk assessment guide diagnosis and treatment planning. Chain-of-Thought (CoT) reasoning is particularly important in this setting because it enables step-by-step interpretation from imaging findings to clinical impressions and pathology conclusions, improving traceability and reducing diagnostic errors. Here, we target the clinical tumor analysis task and build a large-scale benchmark that operationalizes a multimodal reasoning pipeline spanning findings, impressions, and pathology predictions. We curate TumorCoT, a large-scale dataset of 1.5M CoT-labeled VQA instructions paired with 3D CT scans, with step-aligned rationales and cross-modal alignments along the trajectory from findings to impression to pathology, enabling evaluation of both answer accuracy and reasoning consistency. We further propose TumorChain, a multimodal interleaved reasoning framework that tightly couples 3D imaging encoders, clinical text understanding, and organ-level vision-language alignment. Through cross-modal alignment and iterative interleaved causal reasoning, TumorChain grounds visual evidence, aggregates conclusions, and issues pathology predictions after multiple rounds of self-refinement, improving traceability and reducing hallucination risk. Experiments show consistent improvements over strong baselines in lesion detection, impression generation, and pathology classification, and demonstrate strong generalization on the DeepTumorVQA benchmark. These results highlight the potential of multimodal reasoning for reliable and interpretable tumor analysis in clinical practice. Detailed information is available on our project homepage: https://github.com/ZJU4HealthCare/TumorChain.
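As an illustration of the findings-to-impression-to-pathology trajectory described above, the sketch below shows what a single CoT-labeled VQA instance could look like. All field names, values, and the file path here are hypothetical placeholders for exposition, not the actual released TumorCoT schema.

```python
# Hypothetical sketch of one CoT-labeled VQA instance. Field names, values,
# and the scan path are illustrative assumptions, not the released schema.
example = {
    "ct_volume": "scans/case_0001.nii.gz",  # path to a 3D CT scan
    "question": "What is the most likely pathology of the hepatic lesion?",
    "cot_steps": [
        {"stage": "findings",
         "rationale": "Hypodense lesion in the right hepatic lobe.",
         "grounding": {"organ": "liver"}},
        {"stage": "impression",
         "rationale": "Imaging appearance suggests a malignant mass.",
         "grounding": {"organ": "liver"}},
        {"stage": "pathology",
         "rationale": "Pattern is most consistent with a primary liver tumor.",
         "grounding": {"organ": "liver"}},
    ],
    "answer": "hepatocellular carcinoma",
}

# Reasoning-consistency check: the rationale stages must follow the
# findings -> impression -> pathology order, which is what allows scoring
# reasoning consistency alongside answer accuracy.
stages = [step["stage"] for step in example["cot_steps"]]
assert stages == ["findings", "impression", "pathology"]
```

Keeping each rationale step aligned to a stage and grounded to an organ is what makes both the answer and the reasoning chain checkable.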