Vision-Language Model-Based Semantic-Guided Imaging Biomarker for Early Lung Cancer Detection

📅 2025-04-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing lung nodule malignancy risk prediction models rely on manual annotations, suffer from poor interpretability, and are highly sensitive to imaging variations, limiting their clinical generalizability. To address these limitations, we propose a semantic-guided radiomic biomarker modeling framework. We introduce the first end-to-end alignment of low-dose CT images with radiologist-defined semantic features—such as margin, density, and pleural traction—using CLIP, enabling interpretable predictions without inference-time annotation. Our approach combines parameter-efficient fine-tuning, multi-center data integration, and joint semantic embedding modeling to mitigate shortcut learning. Evaluated across multiple centers, our model achieves an AUROC of 0.90 and AUPRC of 0.78 for one-year lung cancer prediction, significantly outperforming baseline methods. Moreover, it accurately decodes key semantic indicators: margin (AUROC = 0.81), density (AUROC = 0.81), and pleural traction (AUROC = 0.84). The framework delivers high accuracy, robustness to imaging variability, and clinically meaningful interpretability.

Technology Category

Application Category

📝 Abstract
Objective: A number of machine learning models have utilized semantic features, deep features, or both to assess lung nodule malignancy. However, their reliance on manual annotation during inference, limited interpretability, and sensitivity to imaging variations hinder their application in real-world clinical settings. Thus, this research aims to integrate semantic features derived from radiologists' assessments of nodules, allowing the model to learn clinically relevant, robust, and explainable features for predicting lung cancer. Methods: We obtained 938 low-dose CT scans from the National Lung Screening Trial with 1,246 nodules and semantic features. The Lung Image Database Consortium dataset contains 1,018 CT scans, with 2,625 lesions annotated for nodule characteristics. Three external datasets were obtained from UCLA Health, the LUNGx Challenge, and the Duke Lung Cancer Screening. We finetuned a pretrained Contrastive Language-Image Pretraining model with a parameter-efficient fine-tuning approach to align imaging and semantic features and predict the one-year lung cancer diagnosis. Results: We evaluated the performance of the one-year diagnosis of lung cancer with AUROC and AUPRC and compared it to three state-of-the-art models. Our model demonstrated an AUROC of 0.90 and AUPRC of 0.78, outperforming baseline state-of-the-art models on external datasets. Using CLIP, we also obtained predictions on semantic features, such as nodule margin (AUROC: 0.81), nodule consistency (0.81), and pleural attachment (0.84), that can be used to explain model predictions. Conclusion: Our approach accurately classifies lung nodules as benign or malignant, providing explainable outputs, aiding clinicians in comprehending the underlying meaning of model predictions. This approach also prevents the model from learning shortcuts and generalizes across clinical settings.
Problem

Research questions and friction points this paper is trying to address.

Integrates semantic features for robust lung cancer prediction
Reduces reliance on manual annotation and improves interpretability
Enhances generalization across diverse clinical imaging variations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned CLIP model for feature alignment
Semantic-guided imaging biomarker for cancer detection
Explainable predictions with nodule characteristics
🔎 Similar Papers
No similar papers found.
L
Luoting Zhuang
S
Seyed Mohammad Hossein Tabatabaei
R
R. Salehi-Rad
L
Linh M. Tran
D
Denise R. Aberle
A
A.E. Prosper
William Hsu
William Hsu
Professor of Radiological Sciences and Bioengineering, Director of Medical Informatics Ph.D. at UCLA
Biomedical informaticsmachine learningimaging informaticscancer detection