🤖 AI Summary
This work addresses collaborative navigation in human-robot shared workspaces, where safety-critical constraints—including obstacle avoidance and inter-agent spacing—must be jointly satisfied.
Method: We propose a dynamic game-theoretic framework based on a non-normalized shared-constraint equilibrium, which enables asymmetric allocation of safety responsibility according to the respective capabilities of the human and the robot. The equilibrium solution is embedded within a receding-horizon optimal control architecture, integrating shared-constraint modeling with model predictive control (MPC) for online, safe, and efficient coordination.
Contribution/Results: The approach preserves robustness without sacrificing task efficiency, and experimental evaluation demonstrates its stability, adaptability, and flexible responsibility assignment in complex, dynamic environments. By explicitly encoding interpretable, tunable safety obligations into the decision-making process, the framework establishes a novel paradigm for explainable and adjustable human–robot co-navigation systems.
📝 Abstract
This paper proposes a dynamic game formulation for cooperative human-robot navigation in shared workspaces with obstacles, where the human and robot jointly satisfy shared safety constraints while pursuing a common task. A key contribution is the introduction of a non-normalized equilibrium structure for the shared constraints. This structure allows the two agents to contribute different levels of effort toward enforcing safety requirements such as collision avoidance and inter-player spacing. We embed this non-normalized equilibrium into a receding-horizon optimal control scheme.
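To make the idea of asymmetric responsibility allocation concrete, here is a minimal toy sketch of one receding-horizon update step. It is not the paper's actual equilibrium computation: the function name, the goal-tracking update, and the responsibility weights `w_h`, `w_r` are all illustrative assumptions. The key point it demonstrates is that the correction needed to restore a shared spacing constraint is split between the two agents in proportion to weights that need not sum to one, mirroring the non-normalized equilibrium concept.

```python
import numpy as np

def receding_horizon_step(x_h, x_r, g_h, g_r, d_min=1.0,
                          w_h=0.8, w_r=0.2, step=0.3):
    """One toy receding-horizon update for a human-robot pair (illustrative only).

    Each agent takes a step toward its goal; if the shared spacing
    constraint ||x_h - x_r|| >= d_min is violated, the separating
    correction is split according to responsibility weights w_h, w_r.
    """
    # Nominal goal-tracking updates (stand-in for each agent's MPC step)
    x_h = x_h + step * (g_h - x_h)
    x_r = x_r + step * (g_r - x_r)

    # Restore the shared spacing constraint with asymmetric responsibility
    diff = x_h - x_r
    dist = np.linalg.norm(diff)
    if dist < d_min:
        direction = diff / max(dist, 1e-9)  # unit vector pushing agents apart
        gap = d_min - dist                  # violation to be absorbed
        total = w_h + w_r                   # weights need not sum to 1
        x_h = x_h + (w_h / total) * gap * direction
        x_r = x_r - (w_r / total) * gap * direction
    return x_h, x_r

# Example: two agents whose goals pull them toward each other
x_h, x_r = receding_horizon_step(np.array([0.0, 0.0]), np.array([0.5, 0.0]),
                                 np.array([1.0, 0.0]), np.array([0.0, 0.0]))
```

With `w_h = 0.8` and `w_r = 0.2`, the first agent absorbs 80% of the separating motion, so tuning these weights directly tunes who carries the safety burden at each horizon step.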