CHiRPE: A Step Towards Real-World Clinical NLP with Clinician-Oriented Model Explanations

📅 2026-01-26
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This study addresses the limited clinical interpretability of existing NLP tools, which often fail to align with physicians’ reasoning. Using 944 semi-structured clinical interview transcripts, the authors develop an end-to-end NLP pipeline that integrates symptom-domain mapping, LLM-based summarization, and BERT classifiers, predicting psychosis risk with over 90% accuracy across three BERT variants. To improve clinical utility, the work introduces a physician-in-the-loop, concept-guided SHAP explanation framework presented in a hybrid graph-and-text summary format. In an evaluation by 28 clinical experts, this explanation format was strongly preferred over existing baselines, supporting its usability in real-world clinical settings.
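The summarise-then-classify stage can be pictured with a short sketch. The code below is not the authors’ implementation: the model name, label order, and example text are placeholders, and a generic bert-base-uncased head stands in for the fine-tuned CHiRPE classifiers.

```python
# Minimal sketch (assumptions, not CHiRPE's code): score one LLM-condensed
# interview summary with a BERT-style sequence classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder: in CHiRPE this would be a BERT variant fine-tuned on the
# LLM-generated interview summaries; here a generic base model stands in.
MODEL_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_risk(summary_text: str) -> dict:
    """Return class probabilities for one interview summary."""
    inputs = tokenizer(summary_text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    # Label order is an assumption; a trained checkpoint would define id2label.
    return {"lower_risk": probs[0].item(), "higher_risk": probs[1].item()}

print(predict_risk("Over the past six months the patient reports hearing faint voices ..."))
```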

📝 Abstract
The medical adoption of NLP tools requires interpretability by end users, yet traditional explainable AI (XAI) methods are misaligned with clinical reasoning and lack clinician input. We introduce CHiRPE (Clinical High-Risk Prediction with Explainability), an NLP pipeline that takes transcribed semi-structured clinical interviews as input to (i) predict psychosis risk and (ii) generate novel SHAP explanation formats co-developed with clinicians. Trained on 944 semi-structured interview transcripts from 24 international clinics of the AMP-SCZ study, the CHiRPE pipeline integrates symptom-domain mapping, LLM summarisation, and BERT classification. CHiRPE achieved over 90% accuracy across three BERT variants and outperformed baseline models. Explanation formats were evaluated by 28 clinical experts, who indicated a strong preference for our novel concept-guided explanations, especially hybrid graph-and-text summary formats. CHiRPE demonstrates that clinically guided model development produces both accurate and interpretable results. Our next step is real-world testing across our 24 international sites.
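To make the concept-guided explanation idea concrete, here is an illustrative sketch under assumptions of our own: per-token attributions are taken as already computed (e.g. by a SHAP text explainer over the classifier), and the symptom-domain lexicon is invented for illustration rather than taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): roll up token-level
# SHAP attributions into clinician-facing symptom-domain concepts.
from collections import defaultdict

SYMPTOM_DOMAINS = {  # hypothetical clinician-curated concept lexicon
    "perceptual abnormalities": {"voices", "hearing", "shadows", "visions"},
    "unusual thought content": {"followed", "watched", "conspiracy", "special"},
    "disorganised speech": {"tangential", "derailed", "incoherent"},
}

def concept_shap(tokens, attributions):
    """Sum per-token attributions within each symptom domain; rest -> 'other'."""
    domain_scores = defaultdict(float)
    for token, value in zip(tokens, attributions):
        word = token.lower().lstrip("#")  # crude handling of BERT word pieces
        matched = False
        for domain, lexicon in SYMPTOM_DOMAINS.items():
            if word in lexicon:
                domain_scores[domain] += value
                matched = True
        if not matched:
            domain_scores["other"] += value
    return dict(domain_scores)

tokens = ["he", "reports", "hearing", "voices", "and", "feeling", "watched"]
attributions = [0.01, 0.02, 0.35, 0.40, 0.00, 0.05, 0.30]  # toy SHAP values
print(concept_shap(tokens, attributions))
```

Per-domain scores of this kind are the sort of quantity a hybrid graph-and-text summary could then display, for example as one bar per symptom domain alongside a sentence naming the strongest contributors.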
Problem

Research questions and friction points this paper is trying to address.

Clinical NLP
Explainable AI
Model Interpretability
Psychosis Risk Prediction
Clinician-Oriented Explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

clinician-oriented explainability
concept-guided explanations
hybrid graph-and-text summary
symptom-domain mapping
clinical NLP
👥 Authors
Stephanie Fong (Orygen and The University of Melbourne)
Zimu Wang (Tsinghua University)
Guilherme C. Oliveira (AIM for Health Lab, Monash University)
Xiangyu Zhao (AIM for Health Lab, Monash University)
Yiwen Jiang (AIM for Health Lab, Monash University)
Jiahe Liu (AIM for Health Lab, Monash University)
Beau-Luke Colton (Orygen and The University of Melbourne)
Scott Woods (Yale School of Medicine, Yale University)
M. Shenton (Brigham and Women’s Hospital, Harvard Medical School)
Barnaby Nelson (University of Melbourne)
Zongyuan Ge (AIM for Health Lab, Monash University)
Dominic Dwyer (Orygen and The University of Melbourne; AIM for Health Lab, Monash University)