🤖 AI Summary
Existing clinical decision support systems (CDSS) for cannabis use prediction suffer from low clinical trust and adoption due to their opaque, “black-box” modeling approaches.
Method: We propose the first CDSS integrating explainable AI (XAI) techniques—SHAP and LIME—with causal inference (do-calculus) and multimodal affective sensing (real-time facial expression recognition + textual sentiment analysis). Crucially, we pioneer deep coupling between affective computing and large language model (LLM)-generated XAI explanations to enable dynamic, empathetic, and context-adaptive interpretability.
Contribution/Results: The system identifies salient predictive features, which non-technical clinicians interpret correctly in over 87% of cases, and it enhances human-AI collaboration via affective feedback. Empirical evaluation demonstrates significant gains in clinicians' trust in and understanding of AI-driven decisions, with usability and adoption intent increasing by 62%.
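To make the feature-attribution idea concrete, here is a minimal, self-contained sketch of the Shapley-value computation that SHAP approximates efficiently for large models. The toy risk model, feature names, weights, and baseline values below are illustrative assumptions, not details from the paper; real use would rely on the `shap` library against the trained predictor.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy risk score: a weighted sum of three clinical features.
# Feature names and weights are invented for illustration only.
def risk_model(x):
    w = {"use_frequency": 0.5, "age": -0.1, "anxiety_score": 0.3}
    return sum(w[k] * v for k, v in x.items())

def shapley_values(model, x, baseline):
    """Exact Shapley attributions by enumerating every feature coalition.
    'Absent' features are replaced by their baseline values."""
    feats = list(x)
    n = len(feats)
    phi = {}
    for f in feats:
        others = [g for g in feats if g != f]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Classic Shapley coalition weight |S|! (n-|S|-1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_f = {g: x[g] if (g in S or g == f) else baseline[g] for g in feats}
                without_f = {g: x[g] if g in S else baseline[g] for g in feats}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

patient = {"use_frequency": 6.0, "age": 24.0, "anxiety_score": 8.0}
baseline = {"use_frequency": 2.0, "age": 30.0, "anxiety_score": 4.0}
print(shapley_values(risk_model, patient, baseline))
```

For a linear model the attributions reduce to weight times the deviation from baseline, and they sum exactly to the difference between the patient's score and the baseline score, which is the property that makes them readable to a clinician ("use frequency contributed +2.0 to this risk estimate").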
📝 Abstract
As cannabis use has increased in recent years, researchers have come to rely on sophisticated machine learning models to predict cannabis use behavior and its health impacts. However, many artificial intelligence (AI) models lack transparency and interpretability due to their opaque nature, limiting trust in and adoption of them in real-world medical applications such as clinical decision support systems (CDSS). To address this issue, this paper enhances the explainability of the algorithms underlying CDSS by integrating multiple Explainable Artificial Intelligence (XAI) methods and applying causal inference techniques to clarify the model's predictive decisions under various scenarios. By using Large Language Models (LLMs) to provide deeper interpretation of the XAI outputs, we give users more personalized and accessible insights that overcome the challenges posed by AI's "black box" nature. Our system dynamically adjusts feedback based on user queries and emotional states, combining text-based sentiment analysis with real-time facial emotion recognition to ensure responses are empathetic, context-adaptive, and user-centered. This approach bridges the gap between the learning demands of interpretability and the need for intuitive understanding, enabling non-technical users such as clinicians and clinical researchers to interact effectively with AI models. Ultimately, this approach improves usability, enhances perceived trustworthiness, and increases the impact of CDSS in healthcare applications.