🤖 AI Summary
In high-stakes domains (e.g., healthcare, education, finance), AI explanations must be both actionable and contestable; however, existing methods frequently overlook how legal context shapes user behavior and liability attribution. Method: This paper introduces the Legally-Informed Explainable AI paradigm, a systematic framework for designing explanations with legal considerations in mind. It models the distinct legal information needs and action pathways of three key stakeholder groups (clinicians, patients, and regulators), combining legal knowledge graphs, context-aware explanation generation, stakeholder-specific needs modeling, and human-AI co-design. Contribution/Results: The work yields implementable design principles and practical guidelines intended to improve the usability, defensibility, and liability clarity of AI explanations in real-world legal settings, thereby advancing accountable, contestable, and regulation-compliant AI governance.
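The paper offers design principles rather than an implementation, so the following is only a minimal illustrative sketch of what stakeholder-specific needs modeling could look like in practice. Every name, field, and string here is a hypothetical assumption, not the authors' schema: each stakeholder group is mapped to the legal information an explanation should surface and the actions it should enable.

```python
from dataclasses import dataclass

# Hypothetical stakeholder profiles. The paper identifies three groups with
# distinct informational and actionability needs; the fields and example
# values below are illustrative assumptions, not taken from the paper.
@dataclass
class StakeholderProfile:
    name: str
    legal_needs: list[str]      # legal information the explanation must surface
    action_pathways: list[str]  # pragmatic actions the explanation should enable

PROFILES = {
    "clinician": StakeholderProfile(
        name="clinician",
        legal_needs=["standard-of-care basis", "liability when overriding the AI"],
        action_pathways=["document override rationale", "escalate to review board"],
    ),
    "patient": StakeholderProfile(
        name="patient",
        legal_needs=["right to an explanation", "appeal and contestation process"],
        action_pathways=["request human review", "file a formal appeal"],
    ),
    "regulator": StakeholderProfile(
        name="regulator",
        legal_needs=["audit trail", "compliance with applicable regulation"],
        action_pathways=["inspect decision logs", "mandate corrective action"],
    ),
}

def build_explanation(stakeholder: str, model_output: str) -> str:
    """Pair the model's determination with the legal context and
    action pathways relevant to the requesting stakeholder group."""
    profile = PROFILES[stakeholder]
    lines = [f"Determination: {model_output}"]
    lines += [f"Legal context: {need}" for need in profile.legal_needs]
    lines += [f"Available action: {action}" for action in profile.action_pathways]
    return "\n".join(lines)

if __name__ == "__main__":
    # The same determination is explained differently per stakeholder.
    print(build_explanation("clinician", "High risk flagged (score 0.87)"))
    print(build_explanation("patient", "High risk flagged (score 0.87)"))
```

The design choice this sketch illustrates is the paradigm's core move: explanation content is selected per stakeholder group rather than generated one-size-fits-all, so that the same model output yields different legal framings and action options for different audiences.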
📝 Abstract
Explanations for artificial intelligence (AI) systems are intended to support the people impacted by those systems in high-stakes decision-making environments, such as doctors, patients, teachers, students, housing applicants, and many others. To protect people and support the responsible development of AI, explanations need to be actionable (helping people take pragmatic action in response to an AI system) and contestable (enabling people to push back against an AI system and its determinations). In many high-stakes domains, such as healthcare, education, and finance, the sociotechnical environment carries significant legal implications that shape how people use AI explanations. For example, physicians who use AI decision support systems may need information on how accepting or rejecting an AI determination will protect them from lawsuits or help them advocate for their patients. In this paper, we make the case for Legally-Informed Explainable AI, responding to the need to integrate and design for legal considerations when creating AI explanations. We describe three stakeholder groups with different informational and actionability needs, and provide practical recommendations for tackling the design challenges of building explainable AI systems that incorporate legal considerations.