Towards Multimodal Social Conversations with Robots: Using Vision-Language Models

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This position paper addresses a limitation of social robots in open-domain dialogue: overreliance on unimodal language models and insufficient visual perception and cross-modal understanding. The authors outline the overall requirements of a multimodal system for social conversations with robots, spanning environment perception, cross-modal alignment, and dynamic context modeling, and argue that vision-language models (VLMs) can process this wide range of visual information in a sufficiently general manner for autonomous social robots. They describe how to adapt VLMs to this setting, identify the technical challenges that remain, and briefly discuss evaluation practices, offering a foundation for embodied social intelligence in general, rather than task-oriented, interaction scenarios.

📝 Abstract
Large language models have given social robots the ability to autonomously engage in open-domain conversations. However, they are still missing a fundamental social skill: making use of the multiple modalities that carry social interactions. While previous work has focused on task-oriented interactions that require referencing the environment or specific phenomena in social interactions such as dialogue breakdowns, we outline the overall needs of a multimodal system for social conversations with robots. We then argue that vision-language models are able to process this wide range of visual information in a sufficiently general manner for autonomous social robots. We describe how to adapt them to this setting, which technical challenges remain, and briefly discuss evaluation practices.
Problem

Research questions and friction points this paper is trying to address.

Enabling social robots to exploit the multiple modalities, especially vision, that carry social interactions, beyond unimodal open-domain dialogue
Adapting general-purpose vision-language models to the setting of autonomous social robots
Identifying the remaining technical challenges and suitable evaluation practices for multimodal social conversation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Outlines the overall needs of a multimodal system for social conversations, rather than a single task-oriented skill or isolated phenomenon such as dialogue breakdowns
Argues that vision-language models are general enough to process the wide range of visual information social interaction demands
Describes concrete adaptation steps for VLMs on robots, the technical challenges that remain, and evaluation practices
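The paper does not publish an implementation, but the interaction pattern it describes, conditioning each robot reply on both the dialogue history and the current camera view, can be sketched as a minimal perceive-listen-generate loop. Everything below is a hypothetical illustration: `query_vlm` is a stub standing in for a real vision-language model call, and `frame_caption` stands in for a raw camera frame.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class DialogueState:
    # Running multimodal context: (speaker, utterance) turns plus the
    # most recent description of the visual scene.
    turns: List[Tuple[str, str]] = field(default_factory=list)
    scene: str = ""


def query_vlm(scene: str, turns: List[Tuple[str, str]]) -> str:
    """Stand-in for a real vision-language model call (hypothetical).

    A deployed system would send the raw frame and the dialogue history
    to a VLM; here we return a canned reply grounded in both modalities
    purely for illustration.
    """
    last_user = next((u for s, u in reversed(turns) if s == "user"), "")
    return f"I can see {scene}. You said: '{last_user}'."


def robot_turn(state: DialogueState, user_utterance: str,
               frame_caption: str) -> str:
    # 1) Perceive: refresh the scene context from the latest camera frame.
    state.scene = frame_caption
    # 2) Listen: record the user's utterance in the dialogue history.
    state.turns.append(("user", user_utterance))
    # 3) Generate: condition the reply on both vision and language.
    reply = query_vlm(state.scene, state.turns)
    state.turns.append(("robot", reply))
    return reply


state = DialogueState()
reply = robot_turn(state, "What a nice day!", "a sunny park")
```

The point of the sketch is the data flow, not the stub: each turn fuses a fresh visual observation with the accumulated linguistic context before generation, which is the cross-modal, dynamically updated conditioning the summary attributes to VLM-based social dialogue.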