🤖 AI Summary
This study addresses the limited clinical interpretability of IVF outcome prediction models, identifying a critical gap in existing eXplainable AI (XAI) methods: their focus on internal model features while neglecting patients' genuine concerns about data shifts, the rationale for excluded features, and personalised explanations, which undermines trust. Drawing on four years of anonymous patient feedback, a structured user survey, and in-depth interviews, the authors quantify trust and understandability and show that lay users need explainability beyond the model feature space. In response, they propose a patient-centred, dialogue-based interface and explore user expectations for personalised explanations. The findings expose the shortcomings of conventional XAI's narrow focus on feature importance and offer design directions for trustworthy AI in high-stakes clinical decision-making.
📝 Abstract
This paper evaluates the user interface of an in vitro fertilisation (IVF) outcome prediction tool, focusing on its understandability for patients and potential patients. We analyse four years of anonymous patient feedback, followed by a user survey and interviews, to quantify trust and understandability. The results highlight lay users' need for prediction-model *explainability* beyond the model feature space. We identify user concerns about data shifts and model exclusions that affect trust. These results call attention to the shortcomings of current practice in explainable AI research and design, and to the need for explainability beyond the model feature space and its epistemic assumptions, particularly in high-stakes healthcare contexts where users gather extensive information and develop complex mental models. To address these challenges, we propose a dialogue-based interface and explore user expectations for personalised explanations.