🤖 AI Summary
This study investigates how constraining human supervisors' intervention options in intelligent assistance systems may diminish their sense of moral responsibility, particularly when system errors lead to adverse outcomes. In a between-subjects drone supervision simulation, the authors manipulated the number of actions the AI made available (six, four, two, or one) and combined scenario-based simulation and human–AI interaction design with validated moral psychology scales for quantitative analysis. The findings provide first empirical evidence that restricting supervisors' intervention options can reduce their perceived moral responsibility: participants limited to a single action felt significantly less responsible, while attributions of responsibility to the AI or its developers remained unaffected. These results demonstrate a direct influence of system design on the allocation of moral responsibility, offering theoretical insight and practical guidance for designing responsible AI interfaces and supervisory architectures.
📝 Abstract
AI-based systems can increasingly perform work tasks autonomously. In safety-critical tasks, human oversight of these systems is required to mitigate risks and to ensure responsibility in case something goes wrong. Since people often struggle to stay focused and to exercise effective oversight, intelligent support systems are used to assist them, giving decision recommendations, alerting users, or restricting them from dangerous actions. However, when recommendations are wrong, decision support might undermine the very reason human oversight was employed in the first place: genuine moral responsibility. The goal of our study was to investigate how a decision support system that restricts the available interventions affects overseers' perceived moral responsibility, in particular in cases where the support errs. In a simulated oversight experiment, participants (N = 274) monitored an autonomous drone that faced ten critical situations, choosing from six possible actions to resolve each situation. An AI system constrained participants' choices to either six, four, two, or only one option (between-subjects design). Results showed that participants who were restricted to choosing a single action felt less morally responsible if a crash occurred. At the same time, participants' judgments about the responsibility of other stakeholders (the AI; the developer of the AI) did not change between conditions. Our findings provide important insights for user interface design and oversight architectures: they should prevent users from attributing moral agency to AI, help them understand how moral responsibility is distributed, and, when oversight aims to prevent ethically undesirable outcomes, be designed to support the epistemic and causal conditions required for moral responsibility.
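For illustration only, below is a minimal Python sketch of how the choice-restriction manipulation described in the abstract might be implemented in a simulation. The action labels, function names, and the way the displayed options are selected are assumptions for the sake of the example, not details taken from the paper's materials.

```python
import random

# Full action set an overseer could take in each critical situation
# (six actions, per the abstract; these labels are hypothetical).
ALL_ACTIONS = [
    "continue_flight",
    "hover_in_place",
    "return_to_base",
    "emergency_land",
    "reroute_left",
    "reroute_right",
]

# Between-subjects conditions: how many options the AI leaves available.
CONDITIONS = (6, 4, 2, 1)


def constrain_choices(recommended: str, n_options: int, rng: random.Random) -> list[str]:
    """Return the subset of actions shown to the participant.

    The recommended action is always kept; the remaining slots are filled
    with other actions. How the real study selected the displayed options
    is not described in the abstract, so the random filling here is purely
    an assumption for illustration.
    """
    others = [a for a in ALL_ACTIONS if a != recommended]
    rng.shuffle(others)
    return [recommended] + others[: n_options - 1]


if __name__ == "__main__":
    rng = random.Random(42)
    for n in CONDITIONS:
        shown = constrain_choices("emergency_land", n, rng)
        print(f"{n}-option condition: {shown}")
```

In the single-option condition this sketch leaves the participant with nothing but the recommended action, which mirrors the restriction that, according to the abstract, was associated with lower felt moral responsibility after a crash.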