Towards Scalable Language-Image Pre-training for 3D Medical Imaging

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Language–image pretraining for 3D medical imaging (CT/MRI) has been bottlenecked by the computational cost of volumetric data, which blocks large-scale use of uncurated clinical studies. This paper proposes HLIP, a lightweight hierarchical language–image pretraining framework. HLIP introduces a hierarchical attention mechanism that explicitly models the intrinsic slice–scan–study structure of radiological data, combined with 3D convolutional feature encoding, multimodal contrastive learning, and cross-modal alignment optimization. It is the first framework to enable end-to-end pretraining directly on massive, uncurated clinical data (>460K patients). Evaluated on the Pub-Brain-5, RSNA, CQ500, and Rad-ChestCT benchmarks, HLIP achieves a 32.4% improvement in balanced accuracy and up to a 6.9% gain in macro-AUC. The implementation is publicly available.

📝 Abstract
Language-image pre-training has demonstrated strong performance in 2D medical imaging, but its success in 3D modalities such as CT and MRI remains limited due to the high computational demands of volumetric data, which pose a significant barrier to training on large-scale, uncurated clinical studies. In this study, we introduce Hierarchical attention for Language-Image Pre-training (HLIP), a scalable pre-training framework for 3D medical imaging. HLIP adopts a lightweight hierarchical attention mechanism inspired by the natural hierarchy of radiology data: slice, scan, and study. This mechanism exhibits strong generalizability, e.g., +4.3% macro AUC on the Rad-ChestCT benchmark when pre-trained on CT-RATE. Moreover, the computational efficiency of HLIP enables direct training on uncurated datasets. Trained on 220K patients with 3.13 million scans for brain MRI and 240K patients with 1.44 million scans for head CT, HLIP achieves state-of-the-art performance, e.g., +32.4% balanced ACC on the proposed publicly available brain MRI benchmark Pub-Brain-5; +1.4% and +6.9% macro AUC on head CT benchmarks RSNA and CQ500, respectively. These results demonstrate that, with HLIP, directly pre-training on uncurated clinical datasets is a scalable and effective direction for language-image pre-training in 3D medical imaging. The code is available at https://github.com/Zch0414/hlip
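The slice–scan–study hierarchy the abstract describes can be illustrated with a toy sketch: attention is first computed among slice tokens within each scan, the scans are pooled into scan-level tokens, and a second attention pass mixes those across the study. This is a minimal NumPy sketch under assumed shapes and pooling choices (single-head attention, mean pooling); the function names and structure are illustrative assumptions, not HLIP's actual implementation.

```python
# Hypothetical sketch of hierarchical attention over the slice -> scan -> study
# hierarchy; shapes, pooling, and names are assumptions, not HLIP's real code.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """Plain single-head self-attention over an (n, d) token matrix."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)   # (n, n) pairwise similarities
    return softmax(scores, axis=-1) @ tokens  # (n, d) attention-mixed tokens

def hierarchical_attention(study):
    """study: (num_scans, num_slices, d) slice embeddings for one study.

    Level 1: slices attend only within their own scan, so cost is
             O(num_slices^2) per scan rather than quadratic in all
             slices of the study.
    Level 2: mean-pooled scan tokens attend across the whole study.
    Returns a single (d,) study-level embedding.
    """
    scan_tokens = np.stack(
        [self_attention(scan).mean(axis=0) for scan in study]  # level 1
    )
    return self_attention(scan_tokens).mean(axis=0)            # level 2

rng = np.random.default_rng(0)
emb = hierarchical_attention(rng.normal(size=(4, 16, 32)))  # 4 scans, 16 slices
print(emb.shape)  # (32,)
```

Restricting the first attention pass to within-scan tokens is what keeps the mechanism lightweight: the quadratic cost never spans all slices of a study at once, which is consistent with the paper's claim that this efficiency enables training directly on uncurated datasets.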
Problem

Research questions and friction points this paper is trying to address.

Scalable language-image pre-training for 3D medical imaging
Overcoming computational demands of volumetric CT/MRI data
Enhancing performance on uncurated clinical datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical attention for 3D medical imaging
Lightweight hierarchical attention mechanism
Scalable pre-training on uncurated datasets