🤖 AI Summary
This study investigates the moral and emotional responses humans exhibit toward robot abuse, focusing on how perceived anthropomorphism and individual moral foundations jointly shape ethical judgments. Using a mixed-methods design (N = 201), the research combines experimental video stimuli, the Moral Foundations Questionnaire, measures of emotional arousal and social distance, and qualitative analysis. Findings show that higher anthropomorphism significantly heightens moral concern and anger while reducing perceived social distance. Moreover, participants' moral foundations, particularly progressive orientations, moderate their reasoning pathways, producing divergent patterns of moral evaluation. The results position anthropomorphism as a critical precondition for eliciting moral consideration, offering empirical guidance for the ethical design of robots and for public communication strategies in human–robot interaction contexts.
📝 Abstract
As robots become increasingly integrated into daily life, understanding responses to robot mistreatment carries important ethical and design implications. This mixed-methods study (N = 201) examined how levels of anthropomorphism and moral foundations shape reactions to robot abuse. Participants viewed videos depicting physical mistreatment of robots varying in humanness (Spider, Twofoot, Humanoid) and completed measures assessing moral foundations, anger, and social distance. Results indicated that anthropomorphism determines whether people extend moral consideration to robots, while moral foundations shape how they reason about that consideration. Qualitative analysis revealed distinct reasoning patterns: low-progressivism individuals employed character-based judgments, whereas high-progressivism individuals engaged in future-oriented moral deliberation. The findings offer implications for robot design and policy communication.