🤖 AI Summary
Addressing the clinical challenge of insufficient interpretability under scarce labelled data in early pulmonary nodule diagnosis, this paper proposes a multimodal, multi-scale self-explaining model. Methodologically, we introduce a four-level self-explanation mechanism—semantic clustering, case-based retrieval, attention visualisation, and critical attribute disentanglement—integrated with Vision Transformer-based self-supervised pretraining, sparsely supervised hierarchical prediction, cross-modal feature alignment, and semi-supervised active learning in the learned latent space. Evaluated on the public LIDC dataset, the model achieves 92.3% accuracy using only 1% of the labels, matching or surpassing fully supervised state-of-the-art methods. Expert assessment by radiologists supports the clinical credibility of its explanations. The code is publicly released, and extensive experiments demonstrate robust cross-centre generalisation.
📝 Abstract
Lung cancer, a leading cause of cancer-related deaths globally, underscores the importance of early detection for better patient outcomes. Pulmonary nodules, often early indicators of lung cancer, necessitate accurate and timely diagnosis. Despite advances in Explainable Artificial Intelligence (XAI), many existing systems struggle to provide clear, comprehensive explanations, especially with limited labelled data. This study introduces MERA, a Multimodal and Multiscale self-Explanatory model designed for lung nodule diagnosis with considerably Reduced Annotation requirements. MERA integrates unsupervised and weakly supervised learning strategies (self-supervised learning techniques and a Vision Transformer architecture for unsupervised feature extraction) with a hierarchical prediction mechanism that leverages sparse annotations via semi-supervised active learning in the learned latent space. MERA explains its decisions on multiple levels: model-level global explanations via semantic latent space clustering, instance-level case-based explanations showing similar instances, local visual explanations via attention maps, and concept explanations using critical nodule attributes. Evaluations on the public LIDC dataset show MERA's superior diagnostic accuracy and self-explainability. With only 1% annotated samples, MERA achieves diagnostic accuracy comparable to or exceeding that of state-of-the-art methods requiring full annotation. The model's inherent design delivers comprehensive, robust, multilevel explanations aligned closely with clinical practice, enhancing trustworthiness and transparency. The demonstrated viability of unsupervised and weakly supervised learning lowers the barrier to deploying diagnostic AI in broader medical domains. Our complete code is openly available: https://github.com/diku-dk/credanno.
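The instance-level case-based explanations described above amount to nearest-neighbour retrieval in the learned latent space. A minimal sketch of that idea, assuming cosine similarity over embedding vectors (the function name and toy data are illustrative, not MERA's actual implementation):

```python
import numpy as np

def retrieve_similar_cases(query, bank, k=3):
    """Return indices of the k embeddings in `bank` most similar to `query`
    by cosine similarity, nearest first."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ q                      # cosine similarity to each stored case
    return np.argsort(-sims)[:k]      # indices sorted by descending similarity

# Toy latent vectors: rows 0 and 1 point in roughly the query's direction.
bank = np.array([[1.0, 0.1],
                 [0.9, 0.2],
                 [-1.0, 0.0],
                 [0.0, 1.0]])
query = np.array([1.0, 0.0])
print(retrieve_similar_cases(query, bank, k=2))  # → [0 1]
```

In a diagnostic setting, the retrieved indices would map back to annotated nodules whose images and attributes are shown to the radiologist as supporting evidence for the prediction.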