Evaluating Multimodal Large Language Models with Daily Composite Tasks in Home Environments

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the capability of multimodal large language models (MLLMs) to perform daily composite tasks in home environments, thereby quantifying their gap from artificial general intelligence (AGI). To this end, we propose the first embodied-agent-oriented composite task evaluation framework, inspired by early childhood development, comprising three task categories: object understanding, spatial intelligence, and social activities. We systematically evaluate 17 state-of-the-art closed- and open-source MLLMs within a dynamic home simulation environment. Results reveal consistently poor performance across all categories, highlighting critical deficiencies in cross-modal coordination, embodied reasoning, and social interaction. The framework not only confirms a substantial capability gap between current MLLMs and AGI but also establishes a reproducible, extensible benchmark suite. It provides a methodological foundation for future research in embodied intelligence and multimodal evaluation, enabling rigorous, developmentally grounded assessment of multimodal reasoning in realistic domestic settings.

📝 Abstract
A key feature differentiating artificial general intelligence (AGI) from traditional AI is that AGI can perform composite tasks that require a wide range of capabilities. Although embodied agents powered by multimodal large language models (MLLMs) offer rich perceptual and interactive capabilities, it remains largely unexplored whether they can solve composite tasks. In the current work, we designed a set of composite tasks inspired by common daily activities observed in early childhood development. Within a dynamic and simulated home environment, these tasks span three core domains: object understanding, spatial intelligence, and social activity. We evaluated 17 leading proprietary and open-source MLLMs on these tasks. The results consistently showed poor performance across all three domains, indicating a substantial gap between current capabilities and general intelligence requirements. Together, our tasks offer a preliminary framework for evaluating the general capabilities of embodied agents, marking an early but significant step toward the development of embodied MLLMs and their real-world deployment.
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLMs' ability to solve daily composite tasks
Assessing performance in object, spatial, and social domains
Identifying the gap between current MLLMs and AGI requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated MLLMs on daily composite tasks
Used simulated home environment for testing
Assessed object, spatial, and social intelligence
Zhenliang Zhang
State Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence, Beijing, China
Yuxi Wang
Ocean University of China
Computer Vision
Hongzhao Xie
State Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence, Beijing, China
Shiyun Zhao
State Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence, Beijing, China
Mingyuan Liu
State Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence, Beijing, China
Yujie Lu
Research Scientist, Meta Superintelligence Lab
Vision and Language Model, Large Language Model, Language Grounding
Xinyi He
Xi'an Jiaotong University
Data analytics, Natural Language Processing
Zhenku Cheng
School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
Yujia Peng
Peking University
Depression and anxiety, computational psychiatry, causal perception, social cognition, action recognition