Can AI Model the Complexities of Human Moral Decision-Making? A Qualitative Study of Kidney Allocation Decisions

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper questions whether simple AI models can capture the complexity of human moral decision-making, using kidney allocation as a case study. Method: Through 20 in-depth interviews and thematic coding analysis, the study identifies five phenomenological features of moral judgment: differential valuation of patient attributes, reliance on heuristics, variability across perspectives, volatility of confidence, and ambivalent attitudes toward human-AI collaboration. Contribution/Results: It argues that dominant AI ethics paradigms, which treat morality as static rule-following, fail to account for its dynamism, context-sensitivity, and metacognitive dimensions. In response, the paper proposes "participatory moral modeling", a framework foregrounding the irreplaceable role of human judgment in human-AI co-decision-making. Grounded in phenomenological insight, this practice-centered paradigm shifts AI ethics design from algorithm-centric to human-practice-centric foundations.

📝 Abstract
A growing body of work in Ethical AI attempts to capture human moral judgments through simple computational models. The key question we address in this work is whether such simple AI models capture the critical nuances of moral decision-making by focusing on the use case of kidney allocation. We conducted twenty interviews where participants explained their rationale for their judgments about who should receive a kidney. We observe participants: (a) value patients' morally-relevant attributes to different degrees; (b) use diverse decision-making processes, citing heuristics to reduce decision complexity; (c) can change their opinions; (d) sometimes lack confidence in their decisions (e.g., due to incomplete information); and (e) express enthusiasm and concern regarding AI assisting humans in kidney allocation decisions. Based on these findings, we discuss challenges of computationally modeling moral judgments as a stand-in for human input, highlight drawbacks of current approaches, and suggest future directions to address these issues.
Problem

Research questions and friction points this paper is trying to address.

AI models struggle to capture human moral decision-making nuances.
Current AI approaches oversimplify kidney allocation judgments.
Human moral judgments involve diverse, dynamic, and uncertain factors.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Qualitative interviews to analyze moral decision-making.
Focus on kidney allocation as a use case.
Highlights challenges in modeling human moral judgments.