Gaze-supported Large Language Model Framework for Bi-directional Human-Robot Interaction

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of bidirectional, multimodal, and context-aware support in assistive human–robot interaction, this paper proposes a modular, bidirectional interaction framework grounded in large language models (LLMs). The framework fuses visual, speech, and eye-tracking inputs into a dynamic, language-based contextual state and introduces a multimodal gaze-guided mechanism, allowing the LLM to interpret user intent and environmental changes and respond in real time. Its modular architecture supports adaptation across tasks and deployment on heterogeneous robotic platforms, improving robustness and generalization. In two lab studies, the LLM-based approach improves task adaptability and yields marginal gains in user engagement and task execution metrics compared with a conventional scripted pipeline, though it can produce redundant output; the scripted pipeline remains well suited to simpler tasks. The framework shows clear potential for real-world deployment in assistive robotic applications.
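The paper does not ship code on this page, but the contextual state it describes can be pictured concretely. The sketch below is a minimal, hypothetical Python rendering of how gaze, speech, and vision observations might be fused into a text-based state an LLM can reason over; all names (ContextualState, GazeSample, to_prompt, etc.) are my assumptions, not the authors' actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class GazeSample:
    """A single eye-tracking fixation, expressed in image coordinates."""
    x: float
    y: float
    duration_s: float
    fixated_object: Optional[str] = None  # filled in after matching against detections


@dataclass
class DetectedObject:
    """An object reported by the vision module."""
    label: str
    bbox: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)
    confidence: float


@dataclass
class ContextualState:
    """Dynamic interaction state fused from speech, gaze, and vision inputs."""
    last_utterance: str = ""
    gaze_history: List[GazeSample] = field(default_factory=list)
    visible_objects: List[DetectedObject] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Serialize the state into a language-based representation fed to the LLM."""
        objects = ", ".join(o.label for o in self.visible_objects) or "none"
        fixations = [g.fixated_object for g in self.gaze_history if g.fixated_object]
        gaze = fixations[-1] if fixations else "nothing in particular"
        return (
            f"Visible objects: {objects}.\n"
            f"The user is currently looking at: {gaze}.\n"
            f'The user said: "{self.last_utterance}".'
        )
```

Serializing the fused state as plain text keeps the prompt model-agnostic, which is one plausible way the cross-platform modularity described above could be realized.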

📝 Abstract
The rapid development of Large Language Models (LLMs) creates an exciting potential for flexible, general knowledge-driven Human-Robot Interaction (HRI) systems for assistive robots. Existing HRI systems demonstrate great progress in interpreting and following user instructions, action generation, and robot task solving. On the other hand, bi-directional, multi-modal, and context-aware support of the user in collaborative tasks still remains an open challenge. In this paper, we present a gaze- and speech-informed interface to the assistive robot, which is able to perceive the working environment from multiple vision inputs and support the user dynamically in their tasks. Our system is designed to be modular and transferable to adapt to diverse tasks and robots, and it is capable of real-time use of a language-based interaction state representation and fast on-board perception modules. Its development was supported by multiple public dissemination events, contributing important considerations for improved robustness and user experience. Furthermore, in two lab studies, we compare the performance and user ratings of our system with those of a traditional scripted HRI pipeline. Our findings indicate that an LLM-based approach enhances adaptability and marginally improves user engagement and task execution metrics but may produce redundant output, while a scripted pipeline is well suited for more straightforward tasks.
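The abstract stresses that the framework is modular and transferable across tasks and robots. One hedged way to picture that claim is to hide each perception component and robot platform behind a small interface so the LLM-facing logic never touches platform-specific code. The Protocol names below (PerceptionModule, RobotBackend, interaction_step) are illustrative assumptions, not the framework's published interfaces.

```python
from typing import Callable, Dict, List, Protocol


class PerceptionModule(Protocol):
    """Any on-board perception component the framework can plug in."""

    def observe(self) -> List[str]:
        """Return short natural-language observations of the scene."""
        ...


class RobotBackend(Protocol):
    """Any robot platform the framework can be deployed on."""

    def execute(self, action: str) -> bool:
        """Attempt a named action and report whether it succeeded."""
        ...


def interaction_step(perception: PerceptionModule,
                     robot: RobotBackend,
                     llm_decide: Callable[[List[str], str], Dict[str, str]],
                     user_utterance: str) -> str:
    """One bidirectional turn: perceive, let the LLM decide, act, and reply to the user."""
    observations = perception.observe()
    decision = llm_decide(observations, user_utterance)  # e.g. {"action": "...", "reply": "..."}
    if decision.get("action"):
        robot.execute(decision["action"])
    return decision.get("reply", "")
```

Because interaction_step only depends on the two interfaces, swapping in a different camera stack or robot arm would not require changes to the LLM-side logic, which is consistent with the cross-platform deployment claim.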
Problem

Research questions and friction points this paper is trying to address.

Enabling bi-directional multi-modal human-robot collaboration
Improving context-aware assistive robot interaction
Balancing adaptability and efficiency in LLM-driven HRI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaze- and speech-informed robot interface (see the sketch after this list)
Modular design for diverse tasks
Real-time language-based interaction representation
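To make the gaze-guided part of that interface concrete, here is a minimal, assumed sketch of resolving a deictic utterance such as "hand me that one" by matching the latest fixation to the nearest detected object. The nearest-center matching rule, the names, and the distance threshold are my illustration, not the paper's algorithm.

```python
from math import hypot
from typing import Dict, Optional, Tuple

# Each detection: label -> (x_min, y_min, x_max, y_max) in image coordinates.
Detections = Dict[str, Tuple[float, float, float, float]]


def resolve_gaze_target(gaze_xy: Tuple[float, float],
                        detections: Detections,
                        max_dist_px: float = 80.0) -> Optional[str]:
    """Return the label whose bounding-box center lies nearest the gaze point, if close enough."""
    gx, gy = gaze_xy
    best_label, best_dist = None, float("inf")
    for label, (x0, y0, x1, y1) in detections.items():
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        dist = hypot(gx - cx, gy - cy)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_dist_px else None


# Example: the user says "hand me that one" while fixating near the mug.
detections = {"mug": (300, 200, 360, 260), "screwdriver": (100, 220, 180, 250)}
target = resolve_gaze_target((325.0, 235.0), detections)
print(target)  # -> "mug"; the resolved label can then be injected into the LLM prompt
```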