🤖 AI Summary
Existing multimodal large language models (MLLMs) lack systematic evaluation of their potential as embodied agents, as mainstream benchmarks emphasize isolated capabilities—such as planning or spatial understanding—while neglecting fine-grained, atomic embodied skills.
Method: We introduce BEAR, a fine-grained, multi-task benchmark covering 14 embodied domains, establishing the first systematic evaluation framework for three foundational atomic capabilities: perception, comprehension, and interaction. We further propose BEAR-Agent, a multimodal conversable agent architecture that integrates pretrained vision models to enhance visual perception, 3D understanding, and hierarchical planning.
Contribution/Results: Evaluating 20 state-of-the-art MLLMs reveals pervasive weaknesses in cross-modal reasoning and low-level perception. BEAR-Agent achieves a 9.12% absolute (17.5% relative) performance gain on GPT-5 and successfully transfers to simulated embodied tasks.
📝 Abstract
Embodied capabilities refer to a suite of fundamental abilities an agent needs to perceive, comprehend, and interact with the physical world. While multimodal large language models (MLLMs) show promise as embodied agents, a thorough and systematic evaluation of their embodied capabilities is still lacking, as existing benchmarks primarily focus on specific domains such as planning or spatial understanding. To bridge this gap, we introduce BEAR, a comprehensive and fine-grained benchmark that evaluates MLLMs on atomic embodied capabilities. BEAR comprises 4,469 interleaved image-video-text entries across 14 domains in 6 categories, ranging from low-level pointing, trajectory understanding, and spatial reasoning to high-level planning. Extensive evaluation of 20 representative MLLMs reveals persistent limitations across all domains of embodied capability. To address these shortcomings, we propose BEAR-Agent, a multimodal conversable agent that integrates pretrained vision models to strengthen MLLM perception, 3D understanding, and planning. It substantially enhances MLLM performance across diverse embodied capabilities on BEAR, yielding a 9.12% absolute gain and a 17.5% relative improvement on GPT-5. Furthermore, our experiments indicate that improving MLLM embodied capabilities can benefit embodied tasks in simulated environments. Project website: https://bear-official66.github.io/
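As a quick sanity check on the reported numbers, the absolute and relative gains together determine GPT-5's baseline BEAR score. The back-of-envelope calculation below infers that baseline; the inferred values are not quoted from the paper itself.

```python
# The paper reports a 9.12-point absolute gain that corresponds to a
# 17.5% relative improvement on GPT-5. Solving gain / baseline = 0.175
# recovers the (inferred, unstated) baseline score.
absolute_gain = 9.12   # percentage points
relative_gain = 0.175  # 17.5% relative improvement

baseline = absolute_gain / relative_gain  # inferred GPT-5 baseline on BEAR
improved = baseline + absolute_gain       # inferred score with BEAR-Agent

print(f"inferred baseline: {baseline:.2f}%")  # ≈ 52.11%
print(f"with BEAR-Agent:   {improved:.2f}%")  # ≈ 61.23%
```

The two reported figures are mutually consistent only if GPT-5 scores roughly 52% on BEAR without the agent scaffolding.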