To Help or Not to Help: LLM-based Attentive Support for Human-Robot Group Interactions

📅 2024-03-19
🏛️ IEEE/RJS International Conference on Intelligent RObots and Systems
📈 Citations: 11
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of robotic physical assistance disrupting natural human interaction in multi-person collaborative scenarios. We propose an autonomous decision-making framework integrating multimodal perception (vision and speech), large language model (LLM)-driven contextual understanding and commonsense reasoning, human-robot interaction state modeling, and real-time behavioral inhibition strategies. Our key contribution is the first introduction of an “active silence” mechanism: the robot dynamically assesses the necessity of intervention, enabling human-like social awareness—providing precise physical support when required while proactively refraining from interference otherwise—thus transcending conventional command-following paradigms. Evaluated across diverse group tasks, the framework achieves 89.2% assistance accuracy and only a 3.1% unnecessary intervention rate, significantly enhancing collaboration fluency and human trust in the robot.

Technology Category

Application Category

📝 Abstract
How can a robot provide unobtrusive physical support within a group of humans? We present Attentive Support, a novel interaction concept for robots to support a group of humans. It combines scene perception, dialogue acquisition, situation understanding, and behavior generation with the common-sense reasoning capabilities of Large Language Models (LLMs). In addition to following user instructions, Attentive Support is capable of deciding when and how to support the humans, and when to remain silent to not disturb the group. With a diverse set of scenarios, we show and evaluate the robot’s attentive behavior, which supports and helps the humans when required, while not disturbing if no help is needed.
Problem

Research questions and friction points this paper is trying to address.

How robots can provide unobtrusive physical support in human groups
Combining scene perception and LLMs for human-robot interaction
Deciding when to assist or remain silent in group dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based scene perception and dialogue acquisition
Situation understanding with common-sense reasoning
Autonomous decision-making for unobtrusive support
🔎 Similar Papers
No similar papers found.