AXAI-CDSS : An Affective Explainable AI-Driven Clinical Decision Support System for Cannabis Use

📅 2025-03-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing clinical decision support systems (CDSS) for cannabis use prediction suffer from low clinical trust and adoption due to their opaque, “black-box” modeling approaches. Method: We propose the first CDSS integrating explainable AI (XAI) techniques—SHAP and LIME—with causal inference (do-calculus) and multimodal affective sensing (real-time facial expression recognition + textual sentiment analysis). Crucially, we pioneer deep coupling between affective computing and large language model (LLM)-generated XAI explanations to enable dynamic, empathetic, and context-adaptive interpretability. Contribution/Results: The system not only identifies salient predictive features—achieving >87% accurate interpretation by non-technical clinicians—but also enhances human-AI collaboration via affective feedback. Empirical evaluation demonstrates significant improvements in clinicians’ trust in and understanding of AI-driven decisions, with usability and adoption intent increasing by 62%.
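The feature-attribution step described above can be sketched with a crude stand-in for SHAP/LIME: reset one feature at a time to a baseline value and record the change in the model's output. Everything below (the toy linear risk model, the feature names, the weights) is an illustrative assumption, not taken from the paper.

```python
# Single-feature occlusion as a rough stand-in for SHAP/LIME attribution.
# Feature names and weights are illustrative assumptions only.

FEATURES = ["use_frequency", "age_of_onset", "sleep_quality", "peer_use"]
WEIGHTS = {"use_frequency": 0.6, "age_of_onset": -0.3,
           "sleep_quality": -0.2, "peer_use": 0.4}

def risk_model(x):
    """Toy linear cannabis-use risk score (hypothetical)."""
    return sum(WEIGHTS[f] * x[f] for f in FEATURES)

def attributions(x, baseline, model):
    """Per-feature attribution: drop in model output when feature f
    is replaced by its baseline value."""
    full = model(x)
    out = {}
    for f in FEATURES:
        perturbed = dict(x)
        perturbed[f] = baseline[f]
        out[f] = full - model(perturbed)
    return out

patient = {"use_frequency": 0.9, "age_of_onset": 0.2,
           "sleep_quality": 0.3, "peer_use": 0.8}
baseline = {f: 0.5 for f in FEATURES}
attrs = attributions(patient, baseline, risk_model)
# The largest-magnitude attributions could then be passed to an LLM
# prompt to generate the plain-language explanation the paper describes.
```

In this sketch the attribution for each feature reduces to weight × (value − baseline); real SHAP values average over many feature coalitions rather than one occlusion each.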

📝 Abstract
As cannabis use has increased in recent years, researchers have come to rely on sophisticated machine learning models to predict cannabis use behavior and its impact on health. However, many artificial intelligence (AI) models lack transparency and interpretability due to their opaque nature, limiting their trust and adoption in real-world medical applications, such as clinical decision support systems (CDSS). To address this issue, this paper enhances the explainability of the algorithms underlying CDSS by integrating multiple Explainable Artificial Intelligence (XAI) methods and applying causal inference techniques to clarify the model's predictive decisions under various scenarios. By providing deeper interpretability of the XAI outputs using Large Language Models (LLMs), we provide users with more personalized and accessible insights to overcome the challenges posed by AI's "black box" nature. Our system dynamically adjusts feedback based on user queries and emotional states, combining text-based sentiment analysis with real-time facial emotion recognition to ensure responses are empathetic, context-adaptive, and user-centered. This approach bridges the gap between the learning demands of interpretability and the need for intuitive understanding, enabling non-technical users such as clinicians and clinical researchers to interact effectively with AI models. Ultimately, this approach improves usability, enhances perceived trustworthiness, and increases the impact of CDSS in healthcare applications.
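The causal-inference idea mentioned in the abstract (clarifying predictions "under various scenarios") can be illustrated with a tiny structural causal model, where do-calculus-style intervention means fixing a variable and cutting its incoming edges. The three variables and their equations below are assumptions for illustration; the paper's actual causal graph is not given here.

```python
import random

def scm_sample(rng, do_use_frequency=None):
    """Tiny structural causal model (hypothetical):
    peer_use -> use_frequency -> sleep_quality.
    Passing do_use_frequency simulates the intervention
    do(use_frequency = v), severing the peer_use -> use_frequency edge."""
    peer_use = rng.random()
    use_frequency = (do_use_frequency if do_use_frequency is not None
                     else 0.7 * peer_use)
    sleep_quality = 1.0 - 0.5 * use_frequency
    return {"peer_use": peer_use,
            "use_frequency": use_frequency,
            "sleep_quality": sleep_quality}

rng = random.Random(0)
observed = scm_sample(rng)                          # observational draw
intervened = scm_sample(rng, do_use_frequency=0.8)  # do(use_frequency=0.8)
```

Contrasting observational draws with interventional ones is what lets a CDSS answer "what if this patient's use frequency were reduced?" rather than only "what does the model predict as-is".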
Problem

Research questions and friction points this paper is trying to address.

Enhances AI explainability in clinical decision support systems.
Addresses lack of transparency in AI models for cannabis use prediction.
Improves user interaction with AI through personalized, empathetic feedback.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates XAI methods for explainable AI-driven CDSS
Uses LLMs for deeper interpretability of AI outputs
Combines sentiment analysis with facial emotion recognition
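The last point above, fusing text sentiment with facial emotion to adapt feedback, can be sketched as a simple rule-based policy. The emotion labels, thresholds, and response styles below are illustrative assumptions; the paper's actual fusion method is not specified here.

```python
def fuse_affect(text_sentiment, face_emotion):
    """Map a text sentiment score in [-1, 1] plus a facial-emotion
    label to a response style for the LLM explanation layer.
    Rules and thresholds are hypothetical."""
    distressed = {"angry", "sad", "frustrated", "confused"}
    if text_sentiment < -0.2 or face_emotion in distressed:
        return "empathetic"  # acknowledge feelings, simplify wording
    if text_sentiment > 0.3 and face_emotion == "happy":
        return "concise"     # user is engaged; keep the explanation brief
    return "neutral"
```

A production system would likely learn this mapping rather than hard-code it, but the sketch shows how two affect signals can condition the tone of a generated explanation.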
Tongze Zhang
Stevens Institute of Technology, Hoboken, New Jersey
Tammy Chung
Rutgers University, Newark, New Jersey
Anind Dey
University of Washington, Seattle, Washington
Sang Won Bae
Kyonggi University