🤖 AI Summary
This study investigates whether multimodal large language models (MLLMs) can perform composite daily tasks in home environments, thereby quantifying their gap from artificial general intelligence (AGI). To this end, we propose the first embodied-agent-oriented composite-task evaluation framework, inspired by early childhood development and comprising three task categories: object understanding, spatial intelligence, and social activities. We systematically evaluate 17 state-of-the-art closed- and open-source MLLMs within a dynamic home simulation environment. Results reveal consistently poor performance across all categories, highlighting critical deficiencies in cross-modal coordination, embodied reasoning, and social interaction. The framework not only confirms a substantial capability gap between current MLLMs and AGI but also establishes a reproducible, extensible benchmark suite, providing a methodological foundation for future research in embodied intelligence and enabling rigorous, developmentally grounded assessment of multimodal reasoning in realistic domestic settings.
📝 Abstract
A key feature differentiating artificial general intelligence (AGI) from traditional AI is that AGI can perform composite tasks that require a wide range of capabilities. Although embodied agents powered by multimodal large language models (MLLMs) offer rich perceptual and interactive capabilities, it remains largely unexplored whether they can solve composite tasks. In this work, we designed a set of composite tasks inspired by common daily activities observed in early childhood development. Within a dynamic, simulated home environment, these tasks span three core domains: object understanding, spatial intelligence, and social activity. We evaluated 17 leading proprietary and open-source MLLMs on these tasks. The results consistently showed poor performance across all three domains, indicating a substantial gap between current capabilities and the requirements of general intelligence. Together, our tasks offer a preliminary framework for evaluating the general capabilities of embodied agents, marking an early but significant step toward the development of embodied MLLMs and their real-world deployment.