A Multimodal Framework for Human-Multi-Agent Interaction

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing systems struggle to unify multimodal perception, embodied expression, and multi-agent collaborative decision-making within a shared physical space, limiting natural and scalable human–multi-robot interaction. This work proposes a unified framework for human–multi-agent interaction that integrates multimodal perception, large language model (LLM)-driven embodied planning, and a centralized coordination mechanism within a multi-agent architecture. The coordination mechanism dynamically manages speaking turns and behavioral participation, preventing conflicts and enabling coordinated strategies across speech, gesture, gaze, and locomotion. Evaluated on a dual-humanoid robot platform, the system demonstrates cross-agent collaborative reasoning and embodied responsiveness, improving the naturalness and scalability of human–robot interaction.
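
The paper's summary describes the centralized coordination mechanism only at a high level. As an illustration of one plausible form it could take, the Python sketch below implements a simple request/grant arbiter in which agents ask a central coordinator for turns, and conflict-prone modalities such as speech are granted to at most one agent at a time. All class, method, and modality names here are hypothetical assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of a centralized turn-taking coordinator.
# Names and the arbitration policy are assumptions for illustration only.
from dataclasses import dataclass
import itertools


@dataclass
class TurnRequest:
    agent_id: str     # which robot wants to act
    modality: str     # e.g. "speech", "gesture", "gaze", "locomotion"
    priority: float   # salience score from the agent's own planner
    order: int = 0    # arrival order, used as a tie-breaker


class CentralCoordinator:
    """Grants exclusive turns for conflict-prone modalities (e.g. speech),
    while non-conflicting behaviours can be requested in parallel."""

    EXCLUSIVE = {"speech", "locomotion"}  # assumed conflict-prone modalities

    def __init__(self):
        self._counter = itertools.count()
        self._pending = []   # queued TurnRequest objects
        self._active = {}    # modality -> agent_id currently holding the turn

    def request_turn(self, agent_id, modality, priority):
        self._pending.append(
            TurnRequest(agent_id, modality, priority, next(self._counter)))

    def grant_next(self):
        """Pick the highest-priority pending request whose modality is free."""
        free = [r for r in self._pending
                if r.modality not in self.EXCLUSIVE
                or r.modality not in self._active]
        if not free:
            return None
        chosen = max(free, key=lambda r: (r.priority, -r.order))
        self._pending.remove(chosen)
        if chosen.modality in self.EXCLUSIVE:
            self._active[chosen.modality] = chosen.agent_id
        return chosen

    def release(self, modality):
        self._active.pop(modality, None)  # turn finished, modality free again
```

In such a scheme, each robot would call request_turn whenever its planner proposes an utterance or movement, execute only granted requests, and call release when finished, which is one way overlapping speech and conflicting actions could be prevented across agents.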

📝 Abstract
Human-robot interaction is increasingly moving toward multi-robot, socially grounded environments. Existing systems struggle to integrate multimodal perception, embodied expression, and coordinated decision-making in a unified framework. This limits natural and scalable interaction in shared physical spaces. We address this gap by introducing a multimodal framework for human-multi-agent interaction in which each robot operates as an autonomous cognitive agent with integrated multimodal perception and Large Language Model (LLM)-driven planning grounded in embodiment. At the team level, a centralized coordination mechanism regulates turn-taking and agent participation to prevent overlapping speech and conflicting actions. Implemented on two humanoid robots, our framework enables coherent multi-agent interaction through interaction policies that combine speech, gesture, gaze, and locomotion. Representative interaction runs demonstrate coordinated multimodal reasoning across agents and grounded embodied responses. Future work will focus on larger-scale user studies and deeper exploration of socially grounded multi-agent interaction dynamics.
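
The abstract characterizes each robot's LLM-driven planner only at a high level. As an illustration of how such a planner might ground an LLM reply into the four modalities mentioned (speech, gesture, gaze, locomotion), here is a minimal Python sketch. The action schema, prompt, and function names are assumptions for illustration, not the authors' actual interface.

```python
# Hypothetical sketch of per-agent LLM-driven multimodal planning.
# Schema, prompt, and helper names are illustrative assumptions.
import json
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class MultimodalAction:
    speech: Optional[str] = None       # utterance to speak, if any
    gesture: Optional[str] = None      # symbolic gesture label, e.g. "wave"
    gaze_target: Optional[str] = None  # e.g. "user", "other_robot"
    locomotion: Optional[str] = None   # e.g. "approach_user", "stay"


PLANNER_PROMPT = (
    "You are robot {agent_id}. Given the fused perception summary below, reply "
    "with a JSON object with keys speech, gesture, gaze_target and locomotion "
    "(use null for modalities you do not need).\n"
    "Perception: {perception}"
)


def plan_step(agent_id: str, perception_summary: str,
              llm_call: Callable[[str], str]) -> MultimodalAction:
    """Query the LLM (any callable mapping prompt text to reply text) and
    parse its JSON reply into a structured multimodal action."""
    prompt = PLANNER_PROMPT.format(agent_id=agent_id,
                                   perception=perception_summary)
    reply = llm_call(prompt)
    try:
        fields = json.loads(reply)
    except json.JSONDecodeError:
        fields = {}  # unparsable reply -> fall back to a no-op action
    return MultimodalAction(
        speech=fields.get("speech"),
        gesture=fields.get("gesture"),
        gaze_target=fields.get("gaze_target"),
        locomotion=fields.get("locomotion"),
    )
```
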
Problem

Research questions and friction points this paper is trying to address.

human-multi-agent interaction
multimodal perception
embodied expression
coordinated decision-making
socially grounded interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal interaction
human-multi-agent interaction
embodied cognition
LLM-driven planning
centralized coordination