🤖 AI Summary
Existing lung nodule malignancy risk prediction models rely on manual annotations, suffer from poor interpretability, and are highly sensitive to imaging variations, limiting their clinical generalizability. To address these limitations, we propose a semantic-guided radiomic biomarker modeling framework. We introduce the first end-to-end alignment of low-dose CT images with radiologist-defined semantic features—such as margin, density, and pleural traction—using CLIP, enabling interpretable predictions without inference-time annotation. Our approach combines parameter-efficient fine-tuning, multi-center data integration, and joint semantic embedding modeling to mitigate shortcut learning. Evaluated across multiple centers, our model achieves an AUROC of 0.90 and AUPRC of 0.78 for one-year lung cancer prediction, significantly outperforming baseline methods. Moreover, it accurately decodes key semantic indicators: margin (AUROC = 0.81), density (AUROC = 0.81), and pleural traction (AUROC = 0.84). The framework delivers high accuracy, robustness to imaging variability, and clinically meaningful interpretability.
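The headline results above are reported as AUROC and AUPRC. As a reminder of what those numbers measure, here is a minimal self-contained sketch of both metrics; the function names and example values are illustrative, not from the paper.

```python
import numpy as np

def auroc(labels, scores):
    """AUROC as the probability that a random positive outranks a random
    negative (Mann-Whitney U statistic; ties count as half)."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def auprc(labels, scores):
    """AUPRC via average precision: precision averaged at the rank of
    each true positive when cases are sorted by predicted score."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    return precision[labels == 1].mean()
```

Unlike AUROC, AUPRC depends on class prevalence, which is why both are reported: malignant nodules are rare in screening cohorts, and AUPRC better reflects performance on that minority class.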
📝 Abstract
Objective: A number of machine learning models have used semantic features, deep features, or both to assess lung nodule malignancy. However, their reliance on manual annotation during inference, limited interpretability, and sensitivity to imaging variations hinder their application in real-world clinical settings. This research therefore integrates semantic features derived from radiologists' assessments of nodules, allowing the model to learn clinically relevant, robust, and explainable features for predicting lung cancer.

Methods: We obtained 938 low-dose CT scans from the National Lung Screening Trial, comprising 1,246 nodules with semantic feature annotations. The Lung Image Database Consortium dataset contains 1,018 CT scans, with 2,625 lesions annotated for nodule characteristics. Three external datasets were obtained from UCLA Health, the LUNGx Challenge, and the Duke Lung Cancer Screening dataset. We fine-tuned a pretrained Contrastive Language-Image Pretraining (CLIP) model with a parameter-efficient fine-tuning approach to align imaging and semantic features and predict the one-year lung cancer diagnosis.

Results: We evaluated one-year lung cancer prediction with AUROC and AUPRC and compared our model against three state-of-the-art baselines. It achieved an AUROC of 0.90 and an AUPRC of 0.78, outperforming the baseline models on external datasets. Using CLIP, we also obtained predictions for semantic features, such as nodule margin (AUROC: 0.81), nodule consistency (0.81), and pleural attachment (0.84), which can be used to explain model predictions.

Conclusion: Our approach accurately classifies lung nodules as benign or malignant and provides explainable outputs that help clinicians understand model predictions. It also mitigates shortcut learning and generalizes across clinical settings.
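The core of the method is aligning CT image embeddings with embeddings of radiologist-defined semantic descriptions in a shared space, CLIP-style. The sketch below shows only the symmetric contrastive (InfoNCE) alignment objective on toy embeddings; the actual work fine-tunes pretrained CLIP encoders with a parameter-efficient approach, and all names and values here are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss used in CLIP-style training.

    Row i of img_emb and row i of txt_emb are a matched pair (e.g. a nodule
    patch and its semantic description), so the targets are the diagonal of
    the N x N similarity matrix.
    """
    img = l2_normalize(np.asarray(img_emb, dtype=float))
    txt = l2_normalize(np.asarray(txt_emb, dtype=float))
    logits = img @ txt.T / temperature          # (N, N) scaled cosine similarities
    targets = np.arange(len(logits))

    def xent(l):
        # cross-entropy of each row against the diagonal target, stabilized
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[targets, targets].mean()

    # average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Matched embeddings should yield a lower loss than mismatched ones, e.g. `clip_alignment_loss(np.eye(4), np.eye(4))` is far smaller than the same call with the text rows shuffled. Parameter-efficient fine-tuning would update only a small set of adapter weights inside the encoders while this objective is minimized.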