Embedding Radiomics into Vision Transformers for Multimodal Medical Image Classification

📅 2025-04-15
🤖 AI Summary
To address the limited generalizability and poor interpretability of Vision Transformers (ViTs) in few-shot medical image classification, this paper proposes the Radiomics-Embedded Vision Transformer (RE-ViT), a framework that performs end-to-end linear fusion of handcrafted radiomic features, such as Gray-Level Co-occurrence Matrix (GLCM) statistics, with patch embeddings in the early layers of a ViT. This design integrates domain-specific prior knowledge with data-driven representation learning, enhancing both model robustness and clinical interpretability. Evaluated on three public benchmarks (BUSI, ChestXray2017, and Retinal OCT), RE-ViT achieves AUC scores of 0.950, 0.989, and 0.986, respectively, consistently outperforming CNN baselines and the hybrid TransMed model. These results support radiomics-guided early fusion as a promising paradigm for medical ViT architectures.

📝 Abstract
Background: Deep learning has significantly advanced medical image analysis, with Vision Transformers (ViTs) offering a powerful alternative to convolutional models by modeling long-range dependencies through self-attention. However, ViTs are inherently data-intensive and lack domain-specific inductive biases, limiting their applicability in medical imaging. In contrast, radiomics provides interpretable, handcrafted descriptors of tissue heterogeneity but suffers from limited scalability and integration into end-to-end learning frameworks. In this work, we propose the Radiomics-Embedded Vision Transformer (RE-ViT), which combines radiomic features with data-driven visual embeddings within a ViT backbone.

Purpose: To develop a hybrid RE-ViT framework that integrates radiomics and patch-wise ViT embeddings through early fusion, enhancing robustness and performance in medical image classification.

Methods: Following the standard ViT pipeline, images were divided into patches. For each patch, handcrafted radiomic features were extracted and fused with linearly projected pixel embeddings. The fused representations were normalized, positionally encoded, and passed to the ViT encoder. A learnable [CLS] token aggregated patch-level information for classification. We evaluated RE-ViT on three public datasets (BUSI, ChestXray2017, and Retinal OCT) using accuracy, macro AUC, sensitivity, and specificity, benchmarking it against CNN-based (VGG-16, ResNet) and hybrid (TransMed) models.

Results: RE-ViT achieved state-of-the-art results: AUC = 0.950 ± 0.011 on BUSI, AUC = 0.989 ± 0.004 on ChestXray2017, and AUC = 0.986 ± 0.001 on Retinal OCT, outperforming all comparison models.

Conclusions: The RE-ViT framework effectively integrates radiomics with ViT architectures, demonstrating improved performance and generalizability across multimodal medical image classification tasks.
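The early-fusion step described in the Methods (patchify, extract handcrafted radiomic features per patch, fuse with a linear pixel projection) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy GLCM statistics (contrast, homogeneity, energy over a single horizontal offset) stand in for a full radiomics pipeline, the additive linear-fusion form and the random projection weights are assumptions, and normalization, positional encoding, and the ViT encoder itself are omitted.

```python
import numpy as np

def patchify(image, patch_size):
    """Split a square grayscale image into non-overlapping patches."""
    h, w = image.shape
    return [image[i:i + patch_size, j:j + patch_size]
            for i in range(0, h, patch_size)
            for j in range(0, w, patch_size)]

def glcm_features(patch, levels=8):
    """Toy GLCM statistics for one patch (horizontal offset (0, 1) only);
    a stand-in for a full radiomic feature extractor."""
    q = np.floor(patch * (levels - 1)).astype(int)  # quantize to gray levels
    glcm = np.zeros((levels, levels))
    for row in q:                                   # accumulate co-occurrences
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1
    p = glcm / max(glcm.sum(), 1)                   # normalize to probabilities
    i, j = np.indices(p.shape)
    contrast = float((p * (i - j) ** 2).sum())
    homogeneity = float((p / (1 + (i - j) ** 2)).sum())
    energy = float((p ** 2).sum())
    return np.array([contrast, homogeneity, energy])

def fuse_tokens(image, patch_size, w_pix, w_rad):
    """Early fusion: project raw pixels and radiomic features into the same
    embedding space and sum them per patch (one plausible linear-fusion form)."""
    tokens = []
    for patch in patchify(image, patch_size):
        pix = patch.ravel() @ w_pix          # data-driven patch embedding
        rad = glcm_features(patch) @ w_rad   # handcrafted radiomic embedding
        tokens.append(pix + rad)             # linear fusion
    return np.stack(tokens)

rng = np.random.default_rng(0)
img = rng.random((16, 16))                   # hypothetical 16x16 input
d = 32                                       # embedding dimension (assumed)
tok = fuse_tokens(img, 4,
                  rng.standard_normal((16, d)),   # pixel projection
                  rng.standard_normal((3, d)))    # radiomic projection
print(tok.shape)  # (16, 32) — 16 patch tokens, ready for the ViT encoder
```

In the paper's pipeline these fused tokens would then be normalized, positionally encoded, and prepended with a learnable [CLS] token before entering the transformer encoder.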
Problem

Research questions and friction points this paper is trying to address.

Combines radiomics with Vision Transformers for medical imaging
Enhances robustness in medical image classification tasks
Addresses data-intensity and domain-bias limitations of ViTs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines radiomic features with Vision Transformers
Early fusion of handcrafted and data-driven embeddings
Achieves state-of-the-art multimodal classification performance
Zhenyu Yang
Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
Haiming Zhu
Jiangsu Provincial University Key (Construction) Laboratory for Smart Diagnosis and Treatment of Lung Cancer, Kunshan, Jiangsu, China; Department of Radiotherapy and Oncology, The First People’s Hospital of Kunshan, Kunshan, Jiangsu, China
Rihui Zhang
Jiangsu Provincial University Key (Construction) Laboratory for Smart Diagnosis and Treatment of Lung Cancer, Kunshan, Jiangsu, China
Haipeng Zhang
Jiangsu Provincial University Key (Construction) Laboratory for Smart Diagnosis and Treatment of Lung Cancer, Kunshan, Jiangsu, China; Department of Radiation Oncology, Duke University, Durham, NC, United States
Jianliang Wang
Jiangsu Provincial University Key (Construction) Laboratory for Smart Diagnosis and Treatment of Lung Cancer, Kunshan, Jiangsu, China; Department of Radiation Oncology, Duke University, Durham, NC, United States
Chunhao Wang
Department of Radiation Oncology, Duke University, Durham, NC, United States
Minbin Chen
Jiangsu Provincial University Key (Construction) Laboratory for Smart Diagnosis and Treatment of Lung Cancer, Kunshan, Jiangsu, China; Department of Radiology, The First People’s Hospital of Kunshan, Kunshan, Jiangsu, China
Fang-Fang Yin
Professor of Radiation Oncology, Duke University
medical physics · imaging