🤖 AI Summary
To address the limited generalizability and poor interpretability of Vision Transformers (ViTs) in data-limited medical image classification, this paper proposes the Radiomics-Embedded Vision Transformer (RE-ViT), a framework that performs end-to-end linear fusion of handcrafted radiomic features (such as Gray-Level Co-occurrence Matrix, GLCM, statistics) with patch embeddings in the early layers of a ViT. This design integrates domain-specific prior knowledge with data-driven representation learning, improving both robustness and clinical interpretability. Evaluated on three public benchmarks (BUSI, ChestXray2017, and Retinal OCT), RE-ViT achieves AUC scores of 0.950, 0.989, and 0.986, respectively, consistently outperforming baselines including CNN models and the hybrid TransMed architecture. These results support radiomics-guided early fusion as an effective design paradigm for medical ViT architectures.
📝 Abstract
Background: Deep learning has significantly advanced medical image analysis, with Vision Transformers (ViTs) offering a powerful alternative to convolutional models by modeling long-range dependencies through self-attention. However, ViTs are inherently data-intensive and lack domain-specific inductive biases, which limits their applicability in medical imaging. In contrast, radiomics provides interpretable, handcrafted descriptors of tissue heterogeneity but suffers from limited scalability and is difficult to integrate into end-to-end learning frameworks. In this work, we propose the Radiomics-Embedded Vision Transformer (RE-ViT), which combines radiomic features with data-driven visual embeddings within a ViT backbone.

Purpose: To develop a hybrid RE-ViT framework that integrates radiomic features and patch-wise ViT embeddings through early fusion, improving robustness and performance in medical image classification.

Methods: Following the standard ViT pipeline, images were divided into patches. For each patch, handcrafted radiomic features were extracted and fused with linearly projected pixel embeddings. The fused representations were normalized, positionally encoded, and passed to the ViT encoder. A learnable [CLS] token aggregated patch-level information for classification. We evaluated RE-ViT on three public datasets (BUSI, ChestXray2017, and Retinal OCT) using accuracy, macro AUC, sensitivity, and specificity, benchmarking it against CNN-based (VGG-16, ResNet) and hybrid (TransMed) models.

Results: RE-ViT achieved state-of-the-art results: AUC = 0.950 ± 0.011 on BUSI, AUC = 0.989 ± 0.004 on ChestXray2017, and AUC = 0.986 ± 0.001 on Retinal OCT, outperforming all comparison models.

Conclusions: The RE-ViT framework effectively integrates radiomics with ViT architectures, demonstrating improved performance and generalizability across multimodal medical image classification tasks.
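The early-fusion step in Methods can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: it uses a tiny horizontal-offset GLCM reduced to three toy statistics (contrast, energy, homogeneity) in place of the paper's full radiomic feature set, and all dimensions, the random projection weights, and the random positional encoding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def glcm_stats(patch, levels=8):
    """Toy per-patch radiomic features: a small GLCM (horizontal offset 1)
    reduced to contrast, energy, and homogeneity. Stands in for the
    handcrafted radiomic features used in the paper."""
    q = np.clip((patch * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

def embed_patches(image, patch=16, d_model=32):
    """RE-ViT-style early fusion (sketch): per patch, concatenate a linear
    projection of raw pixels with radiomic stats, normalize per token,
    add a (randomly initialized) positional encoding, prepend [CLS]."""
    h, w = image.shape
    n_rad = 3  # number of toy radiomic features per patch
    W_pix = rng.normal(size=(patch * patch, d_model - n_rad)) * 0.02
    tokens = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            p = image[r:r + patch, c:c + patch]
            pix = p.ravel() @ W_pix      # data-driven pixel embedding
            rad = glcm_stats(p)          # handcrafted radiomic features
            tokens.append(np.concatenate([pix, rad]))  # early fusion
    x = np.stack(tokens)
    # Per-token normalization of the fused representation
    x = (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-6)
    x = x + rng.normal(size=x.shape) * 0.02  # stand-in positional encoding
    cls = np.zeros((1, d_model))             # learnable [CLS] token (zeros here)
    return np.concatenate([cls, x], axis=0)  # sequence fed to the ViT encoder

seq = embed_patches(rng.random((64, 64)))
print(seq.shape)  # (17, 32): [CLS] + 16 patches, each a 32-dim fused token
```

In a trained model, `W_pix`, the positional encodings, and the [CLS] token would be learned parameters, and the fused sequence would pass through standard transformer encoder blocks.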