AI Summary
To address the clinical challenge of differentiating pseudoprogression (PsP) from true progression (TP) in glioblastoma patients after radiotherapy, where conventional imaging has limited diagnostic specificity, this study proposes a self-supervised multimodal deep learning framework. The method integrates routine multiparametric MRI (FLAIR and contrast-enhanced T1-weighted), clinical variables, and radiotherapy planning data. A self-supervised Vision Transformer (ViT) extracts MRI features, while a cross-modal guided attention mechanism enables joint modeling across the heterogeneous modalities. Pretraining on unlabeled public datasets mitigates label scarcity. Evaluated on multicenter data, the model achieves an AUC of 75.3%, outperforming existing data-driven approaches. Critically, it relies solely on routinely acquired clinical and treatment data, supporting generalizability, interpretability, and clinical deployability.
Abstract
Accurate differentiation of pseudoprogression (PsP) from true progression (TP) following radiotherapy (RT) in glioblastoma (GBM) patients is crucial for optimal treatment planning. However, this task remains challenging due to the overlapping imaging characteristics of PsP and TP. This study therefore proposes a multimodal deep-learning approach that exploits complementary information from routine anatomical MR images, clinical parameters, and RT treatment planning information for improved predictive accuracy. The approach uses a self-supervised Vision Transformer (ViT) to encode multi-sequence MR brain volumes, effectively capturing both global and local context from the high-dimensional input. The encoder is trained on a self-supervised upstream task using unlabeled glioma MRI data from the open BraTS2021, UPenn-GBM, and UCSF-PDGM datasets to generate compact, clinically relevant representations from FLAIR and post-contrast T1-weighted sequences. These encoded MR inputs are then integrated with clinical data and RT treatment planning information through guided cross-modal attention, improving progression classification accuracy. This work was developed using two datasets from different centers: the Burdenko Glioblastoma Progression Dataset (n = 59) for training and validation, and the GlioCMV progression dataset from the University Hospital Erlangen (UKER) (n = 20) for testing. The proposed method achieved an AUC of 75.3%, outperforming current state-of-the-art data-driven approaches. Importantly, the approach relies on readily available anatomical MRI sequences, clinical data, and RT treatment planning information, enhancing its clinical feasibility. It addresses the challenge of limited labeled data for PsP and TP differentiation and could enable improved clinical decision-making and optimized treatment plans for GBM patients.
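The guided cross-modal attention step described above can be illustrated with a minimal, dependency-free sketch: a single-head scaled dot-product attention in which a clinical/RT feature vector acts as the query over ViT-encoded MRI token embeddings. All dimensions, the toy random inputs, and the choice of guidance direction (clinical features querying image tokens) are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

def matmul(A, B):
    # Naive matrix multiply: A (n x k) times B (k x m).
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(row):
    # Numerically stable softmax over one row of scores.
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product attention: queries attend over keys/values."""
    d = len(keys[0])
    # scores[i][j] = <q_i, k_j> / sqrt(d)
    scores = [[sum(q * k for q, k in zip(qrow, krow)) / math.sqrt(d) for krow in keys]
              for qrow in queries]
    weights = [softmax(row) for row in scores]
    return matmul(weights, values), weights

# Toy example: one clinical/RT feature vector (query, hypothetical) attends
# over 4 MRI token embeddings (keys/values), each of dimension 3.
random.seed(0)
mri_tokens = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
clin_query = [[random.gauss(0, 1) for _ in range(3)]]

fused, attn = cross_attention(clin_query, mri_tokens, mri_tokens)
print(len(fused), len(fused[0]))       # one fused vector of dimension 3
print(abs(sum(attn[0]) - 1.0) < 1e-9)  # attention weights sum to 1
```

In a real model the queries, keys, and values would be learned linear projections of the clinical/RT and MRI embeddings, and the fused representation would feed a classification head for PsP vs. TP.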