🤖 AI Summary
AI ethics practices frequently fail because of power imbalances embedded in sociotechnical systems, most visibly in explainable AI (XAI) systems that lack genuine ethical functionality. This paper identifies the root cause as unjust, unaccountable power structures that marginalize vulnerable stakeholders and suppress pluralistic values. Methodologically, we integrate XAI techniques, sociotechnical systems analysis, critical technical practice, and ethical value encoding to diagnose four empirically grounded cases of ethical failure. Building on this analysis, we propose a systemic intervention framework comprising three interlocking strategies: narrative reframing, institutionalized power-balancing mechanisms, and techno-ethical co-design. Our contribution shifts AI governance from mere technical compliance toward deep value embedding, offering both theoretical grounding and actionable pathways for equitable, responsible AI development and deployment.
📝 Abstract
The operationalization of ethics in the technical practices of artificial intelligence (AI) faces significant challenges. To address the problem of ineffective implementation of AI ethics, we present a diagnosis, analysis, and interventional recommendations from a distinctive perspective: the real-world implementation of AI ethics through explainable AI (XAI) techniques. We first describe the phenomenon (i.e., the "symptoms") of ineffective implementation of AI ethics in explainable AI using four empirical cases. From these "symptoms", we diagnose the root cause (i.e., the "disease") as the dysfunction and imbalance of power structures in the sociotechnical system of AI. These power structures are dominated by unjust and unchecked power that neither represents the benefits and interests of the public and the most impacted communities nor can be countervailed by ethical power. Based on this understanding of power mechanisms, we propose three interventional recommendations to tackle the root cause: 1) making power explicable and checked; 2) reframing the narratives and assumptions of AI and AI ethics to check unjust power and reflect the values and benefits of the public; and 3) uniting the efforts of ethical and scientific conduct of AI to encode ethical values as technical standards, norms, and methods, including critical examinations and limitation analyses of AI technical practices. We hope that our diagnosis and interventional recommendations serve as useful input to the AI community's and civil society's ongoing discussion and implementation of ethics in AI, in support of ethical and responsible AI practice.