Legally-Informed Explainable AI

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
In high-stakes domains such as healthcare, education, and finance, AI explanations must be both actionable and contestable; existing explanation methods, however, largely overlook how legal context shapes user behavior and liability attribution. Method: The paper introduces the Legally-Informed Explainable AI paradigm, a systematic framework for legally aware explainability. It models the distinct legal information needs and action pathways of three stakeholder groups (clinicians, patients, and regulators), integrating legal knowledge graphs, context-aware explanation generation, stakeholder-specific needs modeling, and human-AI co-design. Contribution/Results: The work distills design principles and practical recommendations intended to improve the usability, defensibility, and liability clarity of AI explanations in real-world legal settings, supporting accountable, contestable, and regulation-compliant AI governance.
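To make the stakeholder-differentiated idea concrete, here is a minimal sketch of how an explanation pipeline might attach role-specific legal context and action pathways to a base model rationale. This is not code from the paper; every role name, field, and template string below is a hypothetical illustration of the general pattern.

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    rationale: str      # why the model produced this output
    legal_context: str  # liability / recourse information for this stakeholder
    actions: list = field(default_factory=list)  # pragmatic next steps

# Hypothetical mapping of stakeholder roles to the legal information
# and action pathways an explanation should surface for them.
STAKEHOLDER_NEEDS = {
    "clinician": {
        "legal_context": "Documentation standards for accepting or "
                         "overriding the AI recommendation.",
        "actions": ["record rationale in patient chart",
                    "request second opinion"],
    },
    "patient": {
        "legal_context": "Rights to contest an AI-influenced decision "
                         "and request human review.",
        "actions": ["file an appeal", "request the decision record"],
    },
    "regulator": {
        "legal_context": "Audit trail linking model version, inputs, "
                         "and decision outcome.",
        "actions": ["inspect audit log",
                    "verify compliance with applicable rules"],
    },
}

def explain_for(stakeholder: str, model_rationale: str) -> Explanation:
    """Attach stakeholder-specific legal context and action pathways
    to a base model rationale."""
    needs = STAKEHOLDER_NEEDS[stakeholder]
    return Explanation(
        rationale=model_rationale,
        legal_context=needs["legal_context"],
        actions=list(needs["actions"]),
    )

exp = explain_for("patient", "Risk score driven by feature X over threshold.")
print(exp.legal_context)
```

The same model rationale is reused for every role; only the legal framing and suggested actions change, which mirrors the paper's point that actionability and contestability depend on who receives the explanation.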

📝 Abstract
Explanations for artificial intelligence (AI) systems are intended to support the people who are impacted by AI systems in high-stakes decision-making environments, such as doctors, patients, teachers, students, housing applicants, and many others. To protect people and support the responsible development of AI, explanations need to be actionable--helping people take pragmatic action in response to an AI system--and contestable--enabling people to push back against an AI system and its determinations. For many high-stakes domains, such as healthcare, education, and finance, the sociotechnical environment includes significant legal implications that impact how people use AI explanations. For example, physicians who use AI decision support systems may need information on how accepting or rejecting an AI determination will protect them from lawsuits or help them advocate for their patients. In this paper, we make the case for Legally-Informed Explainable AI, responding to the need to integrate and design for legal considerations when creating AI explanations. We describe three stakeholder groups with different informational and actionability needs, and provide practical recommendations for tackling challenges in the design of explainable AI systems that incorporate legal considerations.
Problem

Research questions and friction points this paper is trying to address.

Develop actionable AI explanations for high-stakes decisions
Ensure AI explanations are contestable to protect user rights
Integrate legal considerations into explainable AI system design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Legally-Informed Explainable AI design
Actionable and contestable AI explanations
Legal considerations for stakeholder needs