🤖 AI Summary
This work proposes a large language model (LLM)-based coordination framework to address model divergence in human-robot collaboration, which arises from mismatches in how humans and robots interpret the environment and the actions available in it. The framework generates interpretable explanations of these discrepancies without requiring an explicit model of the user's mental state, and enables closed-loop recovery through a human-in-the-loop correction mechanism. Integrating shared control, digital twin technology, and a mobile manipulator platform, the approach is validated on a wheelchair-mounted robotic arm system. Experimental results demonstrate real-time explanation and collaborative resolution of model divergence, enhancing the robustness and transparency of human-robot teamwork.
📝 Abstract
Whenever humans and robots work together, it is essential that unexpected robot behavior can be explained to the user. Especially in applications such as shared control, the user and the robot must share the same model of the objects in the world and of the actions that can be performed on those objects. In this paper, we achieve this with a so-called model reconciliation framework. We leverage a Large Language Model to predict and explain the difference between the robot's and the human's mental models, without the need for a formal mental model of the user. Furthermore, our framework aims to resolve the model divergence after the explanation by allowing the human to correct the robot. We provide an implementation in an assistive robotics domain, where we conduct a set of experiments with a real wheelchair-based mobile manipulator and its digital twin.