🤖 AI Summary
This study addresses the labor-intensive manual review of high-severity safety event reports in radiation oncology and the poor cross-institutional generalizability of existing models. The authors propose BlueBERT_TRANSFER, a cross-institutional transfer learning approach in which BlueBERT is fine-tuned sequentially on reports from a local institution and from IAEA SAFRON, improving generalization across heterogeneous clinical data. Baseline SVM and BlueBERT models reach AUROC 0.82 and 0.81 on the local test set but drop to 0.42 and 0.56 on cross-institutional evaluation, whereas BlueBERT_TRANSFER recovers an AUROC of 0.78 on the external test set. On a manually curated subset, model performance (AUROC 0.85 for SVM, 0.74 for BlueBERT_TRANSFER) is comparable to that of human experts (AUROC 0.81). The framework offers a scalable NLP approach to multicenter safety surveillance that reduces reliance on institution-specific expert annotation.
📝 Abstract
PURPOSE: Incident reports are an important tool for safety and quality improvement in healthcare, but manual review is time-consuming and requires subject matter expertise. Here we present a natural language processing (NLP) screening tool to detect high-severity incident reports in radiation oncology across two institutions.
METHODS AND MATERIALS: We used two text datasets to train and evaluate our NLP models: 7,094 reports from our institution (Inst.) and 571 from the IAEA SAFRON database (SF), all with severity scores labeled by clinical content experts. We trained and evaluated two types of models: a baseline support vector machine (SVM) and BlueBERT, a large language model pretrained on PubMed abstracts and clinical notes from hospitalized patients. We assessed the generalizability of our models in two ways. First, we evaluated models trained on Inst.-train against SF-test. Second, we trained a BlueBERT_TRANSFER model that was fine-tuned first on Inst.-train and then on SF-train before testing on SF-test. To further analyze model performance, we also examined a subset of 59 reports from our Inst. dataset that were manually edited for clarity.
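The baseline setup can be sketched as a standard text-classification pipeline. This is a minimal illustration only: the toy reports and labels below are invented, and TF-IDF features with a linear SVM are an assumption standing in for the study's exact feature representation, which the abstract does not specify.

```python
# Hedged sketch of an SVM severity-screening baseline.
# Reports and labels are illustrative toy data, NOT from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reports = [
    "dose delivered to wrong treatment site",           # high severity
    "minor scheduling delay, no clinical impact",       # low severity
    "incorrect patient setup detected before beam on",  # high severity
    "paperwork filed late",                             # low severity
]
labels = [1, 0, 1, 0]  # 1 = high-severity incident report

# TF-IDF unigrams/bigrams feeding a linear SVM classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(reports, labels)

print(list(model.predict(reports)))  # sanity check on training data
```

A transformer such as BlueBERT would replace the TF-IDF + SVM stage with fine-tuned contextual embeddings, but the screening interface (text in, severity score out) is the same.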
RESULTS: Classification performance on the Inst. test set reached an AUROC of 0.82 using SVM and 0.81 using BlueBERT. Without cross-institution transfer learning, performance on the SF test set was limited to an AUROC of 0.42 using SVM and 0.56 using BlueBERT. BlueBERT_TRANSFER, fine-tuned on both datasets, improved performance on the SF test set to an AUROC of 0.78. Performance of the SVM and BlueBERT_TRANSFER models on the manually curated Inst. reports (AUROC 0.85 and 0.74, respectively) was similar to human performance (AUROC 0.81).
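The AUROC metric used throughout can be computed with scikit-learn's `roc_auc_score`; the labels and scores below are toy values chosen for illustration, not the study's data.

```python
# Illustrative AUROC computation; values are invented, not from the paper.
from sklearn.metrics import roc_auc_score

y_true  = [1, 0, 1, 0]            # 1 = high-severity report
y_score = [0.8, 0.6, 0.3, 0.1]    # model-predicted severity scores

# AUROC = fraction of (positive, negative) pairs ranked correctly:
# here 3 of 4 pairs are ordered correctly (0.3 < 0.6 is the miss).
auroc = roc_auc_score(y_true, y_score)
print(auroc)  # 0.75
```

Because AUROC measures ranking rather than calibrated probabilities, it suits a screening tool whose job is to surface the highest-risk reports for human review.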
CONCLUSION: We successfully developed cross-institution NLP models for incident report text from radiation oncology centers. On a curated dataset, these models detected high-severity reports with performance similar to that of human reviewers.