EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents

📅 2025-01-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing evaluation methods fail to comprehensively assess the holistic capabilities of multimodal large language models (MLLMs) in realistic embodied interaction scenarios. To address this, the authors introduce EmbodiedEval, an interactive embodied-AI benchmark designed for MLLMs, comprising 328 diverse tasks across 125 varied 3D scenes and spanning five task categories: navigation, object interaction, social interaction, attribute question answering, and spatial question answering. A unified simulation and evaluation framework provides high task diversity, rich interactivity, and broad coverage, overcoming the limitations of static image- or video-based evaluations. Evaluating state-of-the-art MLLMs on EmbodiedEval reveals substantial performance gaps relative to human baselines, highlighting the limitations of current MLLMs in embodied capabilities. All evaluation data and the simulation framework are open-sourced to foster reproducible research in embodied AI.

📝 Abstract
Multimodal Large Language Models (MLLMs) have shown significant advancements, providing a promising future for embodied agents. Existing benchmarks for evaluating MLLMs primarily utilize static images or videos, limiting assessments to non-interactive scenarios. Meanwhile, existing embodied AI benchmarks are task-specific and insufficiently diverse, and thus do not adequately evaluate the embodied capabilities of MLLMs. To address this, we propose EmbodiedEval, a comprehensive and interactive evaluation benchmark for MLLMs with embodied tasks. EmbodiedEval features 328 distinct tasks within 125 varied 3D scenes, each of which is rigorously selected and annotated. It covers a broad spectrum of existing embodied AI tasks with significantly enhanced diversity, all within a unified simulation and evaluation framework tailored for MLLMs. The tasks are organized into five categories: navigation, object interaction, social interaction, attribute question answering, and spatial question answering, assessing different capabilities of the agents. We evaluated state-of-the-art MLLMs on EmbodiedEval and found that they fall significantly short of human-level performance on embodied tasks. Our analysis demonstrates the limitations of existing MLLMs in embodied capabilities, providing insights for their future development. We open-source all evaluation data and the simulation framework at https://github.com/thunlp/EmbodiedEval.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Multimodal Information Processing
Complex Task Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

EmbodiedEval
Multimodal Large Language Models
3D Embodied Tasks