Who Benefits from AI Explanations? Towards Accessible and Interpretable Systems

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing XAI methods heavily rely on visual modalities, neglecting accessibility requirements for visually impaired users and lacking systematic evaluation frameworks tailored to persons with disabilities. Method: This study introduces the first systematic investigation into inclusivity gaps in XAI for disabled users, proposing a four-stage inclusive design framework: (1) cross-modal explanation categorization; (2) contextualized user modeling—including persona development for diverse disability profiles; (3) non-visual-first prototype development supporting speech, haptics, and structured audio feedback; and (4) iterative co-evaluation with domain experts and blind/low-vision users. Contribution/Results: Empirical findings demonstrate that semantic simplification and multimodal integration—particularly speech combined with rhythmically structured audio—significantly improve explanation comprehensibility and user trust. The framework provides a reusable methodology and actionable pathway toward equitable, accessible XAI systems.

📝 Abstract
As AI systems are increasingly deployed to support decision-making in critical domains, explainability has become a means to enhance the understandability of these outputs and enable users to make more informed and conscious choices. However, despite growing interest in the usability of eXplainable AI (XAI), the accessibility of these methods, particularly for users with vision impairments, remains underexplored. This paper investigates accessibility gaps in XAI through a two-pronged approach. First, a literature review of 79 studies reveals that evaluations of XAI techniques rarely include disabled users, with most explanations relying on inherently visual formats. Second, we present a four-part methodological proof of concept that operationalizes inclusive XAI design: (1) categorization of AI systems, (2) persona definition and contextualization, (3) prototype design and implementation, and (4) expert and user assessment of XAI techniques for accessibility. Preliminary findings suggest that simplified explanations are more comprehensible for non-visual users than detailed ones, and that multimodal presentation is required for more equitable interpretability.
Problem

Research questions and friction points this paper is trying to address.

Investigates accessibility gaps in XAI for vision-impaired users
Documents the lack of disabled-user inclusion in XAI evaluations
Proposes inclusive XAI design with multimodal explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inclusive XAI design with multimodal presentation
Four-part method for accessible AI explanations
Simplified explanations enhance non-visual user comprehension
Maria J. P. Peixoto
Ontario Tech University
Akriti Pandey
Ontario Tech University
Ahsan Zaman
Ontario Tech University
Peter R. Lewis
Canada Research Chair in Trustworthy Artificial Intelligence at Ontario Tech University
Artificial Intelligence · Self-Awareness · Socio-Technical Systems · Artificial Life · Trust