🤖 AI Summary
Machine learning models for clinical prediction often violate medical domain knowledge, e.g., by predicting survival that does not decrease monotonically with cancer stage, revealing a critical misalignment between learned behavior and clinical expertise.
Method: This paper introduces the first feedback-driven alignment framework for non-generative AI clinical risk models, using prostate cancer prognosis prediction as a case study. It elicits expert preferences via structured clinical questionnaires, formalizes them as verifiable constraints, and integrates them into a constrained learning pipeline. Alignment is quantified through model behavior analysis and a randomized experiment with clinicians.
Contribution/Results: The constrained model preserves predictive performance while significantly improving clinical interpretability and discriminability on critical cases. This work provides the first empirical validation of expert-feedback-driven behavioral regularization for clinical AI, demonstrating both feasibility and efficacy in bridging the gap between data-driven predictions and medical expertise.
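The pipeline itself is not reproduced here, but the core idea of formalizing an expert preference as a verifiable constraint can be sketched concretely: a monotonicity requirement elicited from clinicians can be enforced directly in a gradient-boosted risk model. The snippet below is a minimal illustrative sketch, not the authors' implementation; the feature set, the synthetic data, and the choice of XGBoost's `monotone_constraints` option are all assumptions.

```python
# Minimal sketch (assumptions, not the paper's pipeline): enforce an
# expert-elicited monotonicity constraint in a gradient-boosted risk model.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: cancer stage (1-4), age, PSA level.
X = np.column_stack([
    rng.integers(1, 5, n),        # cancer_stage
    rng.normal(65, 8, n),         # age
    rng.lognormal(1.5, 0.6, n),   # psa
])
# Synthetic mortality label whose risk rises with stage and PSA.
logit = 0.8 * X[:, 0] + 0.03 * (X[:, 1] - 65) + 0.2 * np.log(X[:, 2]) - 2.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Expert constraint: predicted risk must be non-decreasing in cancer stage
# (+1 on the first feature); the other features are left unconstrained (0).
model = XGBClassifier(
    n_estimators=200,
    max_depth=3,
    monotone_constraints=(1, 0, 0),
)
model.fit(X, y)
```

With the `+1` entry on the stage feature, every tree in the ensemble is restricted so that predicted risk can only stay flat or rise as stage increases, which is exactly the kind of behavioral regularization the summary describes.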
📝 Abstract
Over the past decade, the use of machine learning (ML) models in healthcare applications has rapidly increased. Despite high performance, modern ML models do not always capture patterns the end user requires. For example, a model may predict that survival does not decrease monotonically with cancer stage, keeping all other features fixed. In this paper, we present a reproducible framework for investigating this misalignment between model behavior and clinical experiential learning, focusing on the effects of underspecification in modern ML pipelines. In a prostate cancer outcome prediction case study, we first identify and address these inconsistencies by incorporating clinical knowledge, collected via a survey, into the ML model as constraints, and subsequently analyze the impact on model performance and behavior across degrees of underspecification. The approach shows that the ML model can be aligned with clinical experiential learning without compromising performance. Motivated by recent literature in generative AI, we further examine the feasibility of a feedback-driven alignment approach in non-generative AI clinical risk prediction models through a randomized experiment with clinicians. Our findings illustrate that, when clinicians' model preferences are elicited with our proposed methodology, the larger the difference between the constrained and unconstrained models' predictions for a patient, the more apparent that difference is in clinical interpretation.
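As a concrete illustration of the model behavior analysis mentioned above, the sketch below probes monotonicity directly: for each patient it sweeps cancer stage while holding all other features fixed and counts cases where predicted risk drops. This is our assumption of what such a probe might look like, not the paper's protocol; it reuses `model` and `X` from the previous sketch, but works for any fitted binary classifier exposing `predict_proba`.

```python
# Sketch of a monotonicity probe (an assumption, not the paper's protocol).
import numpy as np

def stage_monotonicity_violations(model, X, stage_col=0, stages=(1, 2, 3, 4)):
    """Count patients whose predicted risk is not non-decreasing in cancer
    stage when all other features are held fixed."""
    violations = 0
    for row in X:
        variants = np.tile(row, (len(stages), 1))  # copy the patient per stage
        variants[:, stage_col] = stages            # counterfactual stage sweep
        risks = model.predict_proba(variants)[:, 1]
        if np.any(np.diff(risks) < -1e-9):         # tolerance for float noise
            violations += 1
    return violations

# Expected to be 0 for the constrained model from the previous sketch.
print(stage_monotonicity_violations(model, X))
```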