🤖 AI Summary
This paper addresses a key limitation of social robots in open-domain dialogue: their overreliance on unimodal language models and their lack of visual perception and cross-modal understanding. Whereas prior work targets task-oriented interactions that reference the environment, or isolated phenomena such as dialogue breakdowns, the authors outline the overall requirements of a multimodal system for social conversations with robots. They argue that vision-language models (VLMs) can process the wide range of visual information arising in social interaction in a sufficiently general manner for autonomous social robots. They then describe how to adapt VLMs to this setting, identify the remaining technical challenges, and briefly discuss evaluation practices, positioning VLM-based multimodal perception as a practical path toward embodied social intelligence.
📝 Abstract
Large language models have given social robots the ability to autonomously engage in open-domain conversations. However, these robots still lack a fundamental social skill: making use of the multiple modalities that carry social interactions. Whereas previous work has focused on task-oriented interactions that require referencing the environment, or on specific phenomena in social interactions such as dialogue breakdowns, we outline the overall needs of a multimodal system for social conversations with robots. We then argue that vision-language models can process this wide range of visual information in a sufficiently general manner for autonomous social robots. We describe how to adapt them to this setting, identify the technical challenges that remain, and briefly discuss evaluation practices.