From Explainability to Action: A Generative Operational Framework for Integrating XAI in Clinical Mental Health Screening

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing XAI methods for mental health screening produce only technical explanations (e.g., feature importance scores) that lack clinical relevance and actionable guidance, leaving a critical gap between interpretability and real-world utility. To address this, the paper proposes the Generative Operational Framework, centered on a large language model (LLM) acting as a "translation engine": it ingests model-agnostic XAI outputs (e.g., SHAP, LIME) and grounds them in authoritative clinical guidelines via retrieval-augmented generation (RAG) to produce trustworthy, clinically anchored narrative explanations. The framework systematically unifies XAI techniques with domain-specific knowledge, turning isolated feature attributions into personalized, actionable clinical recommendations that serve clinicians, patients, and developers alike. The paper demonstrates how this design improves the usability of AI explanations in authentic clinical workflows, supporting bias detection, individualized patient communication, and evidence-based decision-making.
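To make the first stage of this pipeline concrete, below is a minimal sketch of computing per-patient SHAP attributions from a screening classifier and serializing the top drivers for the downstream translation step. The model, feature names, and data are hypothetical placeholders, not the paper's actual setup.

```python
# Minimal sketch of the attribution stage, assuming a hypothetical
# questionnaire-based screening classifier; features and data are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

features = ["phq9_total", "gad7_total", "sleep_hours", "social_withdrawal"]
rng = np.random.default_rng(0)
X_train = rng.random((200, len(features)))
y_train = rng.integers(0, 2, 200)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Per-patient feature attributions for the positive (at-risk) class.
explainer = shap.TreeExplainer(model)
patient = X_train[:1]
sv = explainer.shap_values(patient)
if isinstance(sv, list):      # older SHAP versions: one array per class
    sv = sv[1][0]
else:                         # newer SHAP versions: (samples, features, classes)
    sv = sv[0, :, 1]

# Keep only the strongest drivers; this payload feeds the LLM stage.
top = sorted(zip(features, sv), key=lambda t: abs(t[1]), reverse=True)[:3]
attributions = [{"feature": f, "shap_value": round(float(v), 3)} for f, v in top]
print(attributions)
```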

📝 Abstract
Explainable Artificial Intelligence (XAI) has been presented as the critical component for unlocking the potential of machine learning in mental health screening (MHS). However, a persistent lab-to-clinic gap remains. Current XAI techniques such as SHAP and LIME excel at producing technically faithful outputs (e.g., feature importance scores) but fail to deliver clinically relevant, actionable insights that can be used by clinicians or understood by patients. This disconnect between technical transparency and human utility is the primary barrier to real-world adoption. This paper argues that this gap is a translation problem and proposes the Generative Operational Framework, a novel system architecture that leverages Large Language Models (LLMs) as a central translation engine. This framework is designed to ingest the raw, technical outputs from diverse XAI tools and synthesize them with clinical guidelines (via RAG) to automatically generate human-readable, evidence-backed clinical narratives. To justify our solution, we provide a systematic analysis of the components it integrates, tracing the evolution from intrinsic models to generative XAI. We demonstrate how this framework directly addresses key operational barriers, including workflow integration, bias mitigation, and stakeholder-specific communication. This paper also provides a strategic roadmap for moving the field beyond the generation of isolated data points toward the delivery of integrated, actionable, and trustworthy AI in clinical practice.
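A hedged sketch of the translation step described here: attributions from an XAI tool are paired with retrieved guideline passages and assembled into a grounded prompt for a narrative LLM. The guideline store, retrieve(), and call_llm() below are illustrative stand-ins, not the paper's implementation; a real system would search authoritative guideline corpora with embeddings and call a production LLM API.

```python
# Hedged sketch of the "translation engine": SHAP attributions + retrieved
# guideline text -> a grounded prompt for narrative generation.
# GUIDELINES, retrieve(), and call_llm() are illustrative placeholders.

GUIDELINES = {
    "phq9_total": "A PHQ-9 score of 10 or above warrants further clinical "
                  "assessment for depression.",
    "sleep_hours": "Persistent sleep disturbance is a common correlate of "
                   "depressive and anxiety disorders.",
}

def retrieve(feature: str) -> str:
    """Toy lookup; a real RAG system would embed and search guideline corpora."""
    return GUIDELINES.get(feature, "No guideline passage retrieved.")

def build_prompt(attributions: list[dict]) -> str:
    """Pair each attribution with retrieved evidence so the LLM's narrative
    stays anchored to guideline text rather than free association."""
    lines = [
        "Translate the model attributions below into a clinical narrative.",
        "Cite only the guideline passages provided; do not speculate.",
        "",
    ]
    for a in attributions:
        lines.append(f"- {a['feature']} (SHAP {a['shap_value']:+.3f}): "
                     f"{retrieve(a['feature'])}")
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API returning the narrative."""
    raise NotImplementedError("wire this to your LLM provider")

print(build_prompt([{"feature": "phq9_total", "shap_value": 0.412},
                    {"feature": "sleep_hours", "shap_value": -0.187}]))
```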
Problem

Research questions and friction points this paper is trying to address.

Bridging the gap between technical XAI outputs and clinical utility in mental health screening
Translating feature importance scores into actionable insights for clinicians and patients
Overcoming operational barriers to integrate trustworthy AI into clinical workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs translate XAI outputs into clinical narratives
RAG integrates clinical guidelines for evidence-based insights
Generative framework bridges technical transparency with clinical utility (see the audience-framing sketch below)
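The stakeholder-specific communication the paper emphasizes can be prototyped by varying only the framing instructions around the same grounded evidence. The templates below are illustrative assumptions, not the paper's prompts; they would wrap the output of a prompt builder like the build_prompt sketch above.

```python
# Illustrative audience templates wrapped around the same grounded evidence;
# the wording of these instructions is an assumption, not the paper's prompts.
AUDIENCE_FRAMES = {
    "clinician": ("Write a concise, guideline-cited summary for a clinician, "
                  "flagging any attribution that may reflect dataset bias."),
    "patient": ("Explain the screening result in plain, non-alarming language "
                "for the patient, avoiding clinical jargon and raw scores."),
}

def frame_for(audience: str, grounded_evidence: str) -> str:
    """Prepend an audience-specific instruction to the grounded prompt body."""
    return f"{AUDIENCE_FRAMES[audience]}\n\n{grounded_evidence}"

print(frame_for("patient", "- phq9_total (SHAP +0.412): ...guideline text..."))
```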