🤖 AI Summary
Existing multimodal large language models (MLLMs) predominantly operate on 2D medical images, limiting their capacity to model complex 3D anatomical structures and increasing the risk of misread pathologies and diagnostic hallucination. To address this, we propose HSENet, a vision-language understanding framework tailored for 3D CT volumes. Our method introduces dual-3D vision encoders that jointly capture global volumetric context and fine-grained anatomical details; Spatial Packer, an efficient multimodal projector based on centroid-based compression that condenses high-resolution 3D spatial regions into a compact set of informative visual tokens aligned with the LLM's semantic space; and dual-stage alignment pre-training with diagnostic reports. Experiments demonstrate substantial improvements over state-of-the-art methods: a +5.96% gain in 3D cross-modal retrieval (R@100 of 39.85%), +8.01% in medical report generation (BLEU-4 of 24.01%), and +1.99% in 3D visual question answering (Major Class Accuracy of 73.60%), supporting 3D AI-assisted medical diagnosis.
📝 Abstract
Automated 3D CT diagnosis empowers clinicians to make timely, evidence-based decisions by enhancing diagnostic accuracy and workflow efficiency. While multimodal large language models (MLLMs) exhibit promising performance in vision-language understanding, existing methods mainly focus on 2D medical images, which fundamentally limits their ability to capture complex 3D anatomical structures. This limitation often leads to misinterpretation of subtle pathologies and to diagnostic hallucinations. In this paper, we present the Hybrid Spatial Encoding Network (HSENet), a framework that exploits enriched 3D medical visual cues through effective visual perception and projection for accurate and robust vision-language understanding. Specifically, HSENet employs dual-3D vision encoders to perceive both global volumetric contexts and fine-grained anatomical details, pre-trained via dual-stage alignment with diagnostic reports. Furthermore, we propose Spatial Packer, an efficient multimodal projector that condenses high-resolution 3D spatial regions into a compact set of informative visual tokens via centroid-based compression. By pairing Spatial Packers with the dual-3D vision encoders, HSENet seamlessly perceives and transfers hybrid visual representations into the LLM's semantic space, facilitating accurate diagnostic text generation. Experimental results demonstrate that our method achieves state-of-the-art performance in 3D language-visual retrieval (R@100 of 39.85%, a +5.96% gain), 3D medical report generation (BLEU-4 of 24.01%, a +8.01% gain), and 3D visual question answering (Major Class Accuracy of 73.60%, a +1.99% gain), confirming its effectiveness. Our code is available at https://github.com/YanzhaoShi/HSENet.
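To make the centroid-based compression idea behind the Spatial Packer concrete, here is a minimal NumPy sketch. It is an illustrative approximation, not the released implementation: it partitions a voxel feature grid into non-overlapping 3D regions, treats each region's mean feature as its centroid, and pools voxels with softmax weights given by similarity to that centroid, yielding one token per region. The function name and the patch size are assumptions made for this example.

```python
import numpy as np

def centroid_pack(feats, patch=4):
    """Compress a (D, H, W, C) voxel feature grid into one token per
    patch x patch x patch region via centroid-weighted pooling.
    Illustrative sketch only; not the official HSENet Spatial Packer."""
    D, H, W, C = feats.shape
    d, h, w = D // patch, H // patch, W // patch
    # Group voxels into (regions, voxels_per_region, C)
    x = feats[:d * patch, :h * patch, :w * patch]
    x = x.reshape(d, patch, h, patch, w, patch, C)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6).reshape(d * h * w, patch ** 3, C)
    centroid = x.mean(axis=1, keepdims=True)        # (R, 1, C) region centroid
    sim = (x * centroid).sum(axis=-1)               # (R, V) similarity to centroid
    attn = np.exp(sim - sim.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)        # softmax over voxels in a region
    tokens = (attn[..., None] * x).sum(axis=1)      # (R, C) one token per region
    return tokens

# An 8x8x8 feature grid with 32 channels compresses to 2*2*2 = 8 tokens.
feats = np.random.randn(8, 8, 8, 32).astype(np.float32)
tokens = centroid_pack(feats, patch=4)
print(tokens.shape)  # (8, 32)
```

A projector of this kind keeps the token budget fixed regardless of input resolution, which is what lets high-resolution 3D features fit within an LLM's context.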