🤖 AI Summary
Current AI systems for long-term prognosis prediction in Alzheimer's disease struggle to gain clinical trust due to limited interpretability, particularly under conditions of high uncertainty. Through two user experiments with lay participants and with neuroimaging/neurology experts, this study investigates how different uncertainty visualizations—binary (present/absent) versus continuous (saturation-encoded)—influence user trust, confidence, and reliance. The findings reveal a significant interaction between visualization type and user expertise: continuous visualizations enhance perceived model reliability and help users recognize model limitations, whereas binary visualizations bolster immediate decision confidence. Building on these insights, the work proposes empirically grounded design principles for trustworthy AI tailored to clinical decision support in neurodegenerative disease prognosis.
📝 Abstract
Artificial intelligence (AI) is increasingly used to support prognosis in Alzheimer's disease (AD), but adoption remains limited by a lack of transparency and interpretability, particularly for long-term predictions where uncertainty is intrinsic and outcomes may not be known for years. We position uncertainty visualization as an explainable AI (XAI) technique and examine how it shapes trust, confidence, and reliance when users interpret AI-generated forecasts of future transitions in cognitive decline. We conducted two studies, one with general participants (N=37) and one with experts in neuroimaging and neurology (N=10), comparing binary (present/absent) and continuous (saturation-encoded) uncertainty encodings. Continuous encodings improved perceived reliability and helped users recognize model limitations, while binary encodings increased momentary confidence, revealing expertise-dependent trade-offs in interpreting predictions under high uncertainty. These findings surface key challenges in designing uncertainty representations for prognostic AI and culminate in a set of empirically grounded guidelines for creating trustworthy, user-appropriate clinical decision support tools.