🤖 AI Summary
Existing causal responsibility models handle only simplified abstractions with discrete action spaces, making them inadequate for characterizing responsibility in spatial interactions within continuous-action domains such as autonomous driving and service robotics.
Method: This paper extends the FeAR (Feasible Action-Space Reduction) metric, originally defined for grid worlds with discrete actions, to continuous action spaces. The formulation combines counterfactual reasoning with feasibility modeling of kinematically constrained, scene-embedded agents, supporting both backward-looking responsibility attribution and forward-looking responsibility estimation to guide agent decision making.
Contribution/Results: The approach yields a tractable, interpretable measure of causal responsibility for spatial interactions. Demonstrated on prototypical space-sharing conflicts, it supports consistent responsibility assessment, informs safer agent decisions, and aids accountability when deploying artificial agents around humans.
📝 Abstract
Understanding the causal influence of one agent on another is crucial for safely deploying artificially intelligent systems such as automated vehicles and mobile robots into human-inhabited environments. Existing models of causal responsibility deal with simplified abstractions of scenarios with discrete actions, thus limiting their real-world use for understanding responsibility in spatial interactions. Based on the assumption that spatially interacting agents are embedded in a scene and must take an action at each instant, Feasible Action-Space Reduction (FeAR) was proposed as a metric for causal responsibility in a grid-world setting with discrete actions. Since real-world interactions involve continuous action spaces, this paper proposes a formulation of the FeAR metric for measuring causal responsibility in space-continuous interactions. We illustrate the utility of the metric in prototypical space-sharing conflicts, and showcase its applications for analysing backward-looking responsibility and for estimating forward-looking responsibility to guide agent decision making. Our results highlight the potential of the FeAR metric for designing and engineering artificial agents, as well as for assessing the responsibility of agents around humans.
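To make the metric concrete, here is a minimal Python sketch (our illustration, not the paper's implementation) of how a FeAR-style measure could be estimated in a continuous action space: Monte Carlo samples from agent j's bounded action set are checked for feasibility under agent i's actual action and under a counterfactual baseline action, and responsibility is read off as the relative shrinkage of the feasible fraction. The predicate `feasible`, the braking baseline, the uniform sampling, and the normalization are all illustrative assumptions.

```python
import numpy as np

def fear_monte_carlo(feasible, a_i_actual, a_i_baseline,
                     action_low, action_high, n_samples=10_000, seed=0):
    """Monte Carlo estimate of a FeAR-style responsibility of agent i on agent j.

    feasible(a_j, a_i) -> bool is an assumed, user-supplied predicate that is
    True when agent j's continuous action a_j remains feasible (e.g. collision-
    free over a short kinematic rollout) given agent i's action a_i.
    """
    rng = np.random.default_rng(seed)
    low, high = np.asarray(action_low, float), np.asarray(action_high, float)
    # Sample candidate actions uniformly from j's bounded continuous action space.
    samples = rng.uniform(low, high, size=(n_samples, low.size))

    # Fraction of j's action space that stays feasible in the actual world
    # versus the counterfactual world where i takes a baseline action.
    frac_actual = np.mean([feasible(a, a_i_actual) for a in samples])
    frac_baseline = np.mean([feasible(a, a_i_baseline) for a in samples])

    if frac_baseline == 0.0:
        return 0.0  # j had no feasible actions even counterfactually
    # Positive: i's actual action shrank j's feasible action space.
    return (frac_baseline - frac_actual) / frac_baseline

# Toy 1D head-on conflict: actions are accelerations; j's action is
# infeasible if the agents end up within 2 m after a 1 s rollout.
def feasible(a_j, a_i, x_j=0.0, x_i=10.0, v_j=5.0, v_i=-5.0, T=1.0):
    pos_j = x_j + v_j * T + 0.5 * a_j[0] * T**2
    pos_i = x_i + v_i * T + 0.5 * a_i * T**2
    return abs(pos_i - pos_j) > 2.0

# i holding speed (actual) vs. i braking hard (counterfactual baseline).
print(fear_monte_carlo(feasible, a_i_actual=0.0, a_i_baseline=4.0,
                       action_low=[-4.0], action_high=[4.0]))
```

Uniform sampling is only the simplest choice here; a kinematics-aware sampler or an analytic measure of the feasible set could replace it without changing the underlying counterfactual comparison.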