🤖 AI Summary
In current XAI research, “contestability”—the capacity for stakeholders to effectively challenge AI-driven decisions—lacks a formal definition, algorithmic guarantees, and actionable regulatory implementation pathways, rendering it largely aspirational.
Method: We combine a systematic literature review, formal modeling, human-centered interface design, and multi-domain case validation to identify critical gaps in state-of-the-art (SOTA) systems.
Contribution/Results: We propose the first verifiable, engineering-oriented formal definition of contestability; develop a modular design framework spanning human-AI interaction, technical architecture, legal procedures, and organizational governance; and introduce the Contestability Assessment Scale (CAS), the first quantitative evaluation instrument comprising over 20 measurable indicators. Empirical validation demonstrates that our framework enables targeted, implementable improvements—equipping developers with practical tools to embed genuine redress pathways and accountability mechanisms into AI systems.
📝 Abstract
As AI regulations around the world intensify their focus on system safety, contestability has become a mandatory, yet ill-defined, safeguard. In XAI, "contestability" remains an empty promise: no formal definition exists, no algorithm guarantees it, and practitioners lack concrete guidance to satisfy regulatory requirements. Grounded in a systematic literature review, this paper presents the first rigorous formal definition of contestability in explainable AI, directly aligned with stakeholder requirements and regulatory mandates. We introduce a modular framework of by-design and post-hoc mechanisms spanning human-centered interfaces, technical architectures, legal processes, and organizational workflows. To operationalize our framework, we propose the Contestability Assessment Scale, a composite metric built on more than twenty quantitative criteria. Through multiple case studies across diverse application domains, we reveal where state-of-the-art systems fall short and show how our framework drives targeted improvements. By converting contestability from regulatory theory into a practical framework, our work equips practitioners with the tools to embed genuine recourse and accountability into AI systems.
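The abstract describes the Contestability Assessment Scale only as a composite metric over twenty-plus quantitative criteria. A minimal sketch of one plausible aggregation is shown below; the indicator names, weights, and the weighted-mean rule are all illustrative assumptions and do not come from the paper:

```python
# Hypothetical sketch of a CAS-style composite score: a weighted mean of
# per-indicator scores, each normalized to [0, 1]. The actual CAS criteria
# and aggregation rule may differ; everything named here is invented.

def cas_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Return the weighted mean of indicator scores (each expected in [0, 1])."""
    total_weight = sum(weights[name] for name in indicators)
    if total_weight == 0:
        raise ValueError("weights must not sum to zero")
    return sum(score * weights[name] for name, score in indicators.items()) / total_weight

# Example with made-up indicators:
indicators = {
    "appeal_channel_available": 1.0,   # users can file a challenge at all
    "explanation_fidelity": 0.6,       # explanations faithfully reflect the model
    "human_review_latency": 0.8,       # challenges are resolved promptly
}
weights = {
    "appeal_channel_available": 2.0,
    "explanation_fidelity": 1.0,
    "human_review_latency": 1.0,
}
print(round(cas_score(indicators, weights), 3))  # → 0.85
```

A weighted mean is only one design choice; a real instrument might instead use minimum-over-criteria scoring so that a single missing redress pathway caps the overall rating.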