See What I Mean? CUE: A Cognitive Model of Understanding Explanations

📅 2025-05-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current XAI evaluation overemphasizes technical fidelity while neglecting cognitive accessibility, particularly for visually impaired users. To address this, we propose the CUE cognitive model, the first to decouple explanation quality into three sequential, human-centered dimensions: legibility (perceptual clarity), readability (comprehensibility), and interpretability (inferential utility), together with formal definitions of user-oriented explanation properties. Through cognitive modeling, controlled colormap comparisons (BWR, Cividis, Coolwarm), and a large-scale user study (N=455), we find that accessibility-optimized colormaps (e.g., Cividis) do not consistently improve visually impaired users' confidence or perceived cognitive effort, and in some cases significantly worsen both. These findings challenge prevailing perceptual-optimization paradigms in XAI visualization. Our work advances the design of adaptive, user-customized XAI interfaces grounded in empirical cognitive science.

📝 Abstract
As machine learning systems increasingly inform critical decisions, the need for human-understandable explanations grows. Current evaluations of Explainable AI (XAI) often prioritize technical fidelity over cognitive accessibility, which critically affects users, in particular those with visual impairments. We propose CUE, a model for Cognitive Understanding of Explanations, linking explanation properties to cognitive sub-processes: legibility (perception), readability (comprehension), and interpretability (interpretation). In a study (N=455) testing heatmaps with varying colormaps (BWR, Cividis, Coolwarm), we found comparable task performance but lower confidence and higher perceived effort for visually impaired users. Contrary to expectations, these gaps were not mitigated, and were sometimes worsened, by accessibility-focused colormaps such as Cividis. These results challenge assumptions about perceptual optimization and support the need for adaptive XAI interfaces. They also validate CUE by demonstrating that altering explanation legibility affects understandability. We contribute: (1) a formalized cognitive model for explanation understanding, (2) an integrated definition of human-centered explanation properties, and (3) empirical evidence motivating accessible, user-tailored XAI.
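The colormap manipulation described in the abstract can be sketched with a few lines of matplotlib: the same attribution heatmap is rendered under the three colormaps the paper compares (BWR, Cividis, Coolwarm). This is only an assumed setup, not the authors' code; the attribution values are random placeholders, not data from the study.

```python
import numpy as np
import matplotlib

# Placeholder attribution map standing in for a model explanation heatmap
# (random values; not data from the paper).
rng = np.random.default_rng(0)
attribution = rng.normal(size=(8, 8))

# Normalize to [0, 1] before mapping to colors, as imshow does internally.
normed = (attribution - attribution.min()) / (attribution.max() - attribution.min())

# The three colormaps compared in the study: "bwr" and "coolwarm" are
# diverging maps; "cividis" is designed to stay legible under
# color-vision deficiency.
rendered = {}
for cmap_name in ["bwr", "cividis", "coolwarm"]:
    cmap = matplotlib.colormaps[cmap_name]  # colormap registry (matplotlib >= 3.5)
    rendered[cmap_name] = cmap(normed)      # (8, 8, 4) RGBA array
```

In CUE's terms, swapping the colormap changes only legibility (perception) while leaving the underlying attribution values, and hence readability and interpretability, untouched, which is what makes it a clean experimental manipulation.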
Problem

Research questions and friction points this paper is trying to address.

Addressing cognitive accessibility gaps in Explainable AI evaluations
Investigating how explanation legibility affects user understanding and confidence
Proposing adaptive XAI interfaces for users with visual impairments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes the CUE cognitive model, linking explanation properties to the cognitive sub-processes of perception, comprehension, and interpretation
Tests heatmap colormaps (BWR, Cividis, Coolwarm) with visually impaired users
Validates the model by showing that altering explanation legibility affects understandability