Bi-Directional Mental Model Reconciliation for Human-Robot Interaction with Large Language Models

πŸ“… 2025-03-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In human-robot collaboration, mismatched mental models between humans and robots often lead to task misinterpretation and diminished trust. To address this, we propose a bidirectional mental model reconciliation framework that abandons the conventional unidirectional alignment assumption. Without presupposing the correctness of either party’s mental model, our approach leverages large language models (LLMs) to implement an interpretable, intervenable, semi-structured natural language dialogue interface. This interface enables dynamic identification, explicit articulation, and joint compensation of missing task context by both agents. The method integrates theoretical mind modeling, dialogue state tracking, and iterative negotiation mechanisms. Experimental evaluation in multi-turn collaborative tasks demonstrates that our framework significantly improves task success rate and human trust, while reducing misinterpretation rates by 37% compared to unidirectional baselines.

Technology Category

Application Category

πŸ“ Abstract
In human-robot interactions, human and robot agents maintain internal mental models of their environment, their shared task, and each other. The accuracy of these representations depends on each agent's ability to perform theory of mind, i.e. to understand the knowledge, preferences, and intentions of their teammate. When mental models diverge to the extent that it affects task execution, reconciliation becomes necessary to prevent the degradation of interaction. We propose a framework for bi-directional mental model reconciliation, leveraging large language models to facilitate alignment through semi-structured natural language dialogue. Our framework relaxes the assumption of prior model reconciliation work that either the human or robot agent begins with a correct model for the other agent to align to. Through our framework, both humans and robots are able to identify and communicate missing task-relevant context during interaction, iteratively progressing toward a shared mental model.
Problem

Research questions and friction points this paper is trying to address.

Addresses mental model divergence in human-robot interaction.
Proposes bi-directional reconciliation using large language models.
Enables iterative alignment through natural language dialogue.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bi-directional mental model reconciliation framework
Leverages large language models for alignment
Facilitates shared mental model through dialogue
πŸ”Ž Similar Papers
No similar papers found.