🤖 AI Summary
Existing world models for embodied AI agents exhibit fragmentation in environmental prediction, intention recognition, and social context modeling, hindering coherent physical–social interaction. Method: We propose a unified “physical–mental” two-layer world modeling framework: a lower layer integrates multimodal perception, memory, and causal reasoning to model dynamic physical environments; an upper layer employs mental state inference to jointly represent user beliefs, goals, and social norms. The architecture enables cross-modal reasoning, long-horizon planning, and collaborative decision-making. Contribution/Results: Experiments demonstrate significant improvements in task completion rate, intention recognition accuracy, and human–robot interaction naturalness across both simulated and real-world settings. Our framework provides a scalable, human-aligned paradigm for advancing embodied intelligence toward human-like interactive capabilities.
📝 Abstract
This paper describes our research on AI agents embodied in visual, virtual or physical forms, enabling them to interact with both users and their environments. These agents, which include virtual avatars, wearable devices, and robots, are designed to perceive, learn and act within their surroundings, which makes them more similar to how humans learn and interact with the environments as compared to disembodied agents. We propose that the development of world models is central to reasoning and planning of embodied AI agents, allowing these agents to understand and predict their environment, to understand user intentions and social contexts, thereby enhancing their ability to perform complex tasks autonomously. World modeling encompasses the integration of multimodal perception, planning through reasoning for action and control, and memory to create a comprehensive understanding of the physical world. Beyond the physical world, we also propose to learn the mental world model of users to enable better human-agent collaboration.