BEAR: Benchmarking and Enhancing Multimodal Language Models for Atomic Embodied Capabilities

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) show promise as embodied agents, yet their embodied capabilities lack systematic evaluation: mainstream benchmarks emphasize isolated capabilities, such as planning or spatial understanding, while neglecting fine-grained, atomic embodied skills. Method: We introduce BEAR, a fine-grained, multi-task benchmark covering 14 embodied domains that establishes the first systematic evaluation framework for three foundational atomic capabilities: perception, comprehension, and interaction. We further propose BEAR-Agent, a multimodal conversable agent that integrates pretrained vision models to strengthen visual perception, 3D understanding, and hierarchical planning. Contribution/Results: Evaluating 20 state-of-the-art MLLMs reveals pervasive weaknesses in cross-modal reasoning and low-level perception. BEAR-Agent achieves a 9.12% absolute (17.5% relative) performance gain on GPT-5, and the improved embodied capabilities transfer to embodied tasks in simulated environments.
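To make the benchmark format concrete, here is a minimal sketch of what a single BEAR-style entry could look like as a data record. The field names and example values are illustrative assumptions only; the paper does not specify its schema here.

```python
# Hypothetical sketch of one BEAR-style entry. Field names and values are
# illustrative assumptions, not the dataset's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BearEntry:
    entry_id: str
    category: str                                       # one of the 6 capability categories
    domain: str                                         # one of the 14 embodied domains
    media: List[str] = field(default_factory=list)      # interleaved image/video references
    question: str = ""
    choices: List[str] = field(default_factory=list)    # answer options, if multiple-choice
    answer: str = ""

# Example record in the spirit of the low-level pointing / trajectory tasks.
example = BearEntry(
    entry_id="bear-0001",
    category="perception",
    domain="trajectory understanding",
    media=["frame_000.png", "frame_001.png"],
    question="In which direction does the gripper move between the two frames?",
    choices=["left", "right", "up", "down"],
    answer="right",
)
```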

📝 Abstract
Embodied capabilities refer to a suite of fundamental abilities for an agent to perceive, comprehend, and interact with the physical world. While multimodal large language models (MLLMs) show promise as embodied agents, a thorough and systematic evaluation of their embodied capabilities remains underexplored, as existing benchmarks primarily focus on specific domains such as planning or spatial understanding. To bridge this gap, we introduce BEAR, a comprehensive and fine-grained benchmark that evaluates MLLMs on atomic embodied capabilities. BEAR comprises 4,469 interleaved image-video-text entries across 14 domains in 6 categories, including tasks from low-level pointing, trajectory understanding, spatial reasoning, to high-level planning. Extensive evaluation results of 20 representative MLLMs reveal their persistent limitations across all domains of embodied capabilities. To tackle the shortfall, we propose BEAR-Agent, a multimodal conversable agent that integrates pretrained vision models to strengthen MLLM perception, 3D understanding, and planning capabilities. It substantially enhances MLLM performance across diverse embodied capabilities on BEAR, yielding a 9.12% absolute gain and a relative improvement of 17.5% on GPT-5. Furthermore, our experiments indicate that improving MLLM embodied capabilities can benefit embodied tasks in simulated environments. Project website: https://bear-official66.github.io/
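As a quick sanity check on the reported numbers, a 9.12% absolute gain that is also a 17.5% relative improvement implies a GPT-5 baseline of roughly 52 points, assuming the relative figure is the absolute gain divided by the baseline score; the baseline itself is not stated in this summary.

```python
# Back-of-the-envelope check of the reported GPT-5 gains, assuming
# relative gain = absolute gain / baseline score (the baseline is not given here).
absolute_gain = 9.12
relative_gain = 0.175

implied_baseline = absolute_gain / relative_gain        # ~52.1
implied_with_agent = implied_baseline + absolute_gain   # ~61.2
print(f"implied GPT-5 baseline: {implied_baseline:.1f}")
print(f"implied GPT-5 + BEAR-Agent: {implied_with_agent:.1f}")
```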
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal language models' embodied capabilities systematically and comprehensively
Addressing limitations in perception, spatial reasoning, and planning abilities of MLLMs
Developing enhanced agents to improve physical world interaction performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates pretrained vision models for enhanced perception
Strengthens 3D understanding and planning capabilities
Multimodal conversable agent for diverse embodied tasks (see the sketch below)
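The sketch below illustrates one way such a conversable, tool-augmented loop could be wired up: the MLLM either answers directly or first calls a pretrained vision tool (detection, depth/3D) and then answers with the tool's observation in context. The function names, tool set, and prompting protocol are hypothetical placeholders, not the authors' actual BEAR-Agent implementation.

```python
# Minimal sketch of a tool-augmented agent loop in the spirit of BEAR-Agent.
# All tool and function names are hypothetical placeholders.
from typing import Callable, Dict

def detect_objects(image_path: str) -> str:
    """Placeholder for a pretrained open-vocabulary detector."""
    return "detected: mug at (120, 340), gripper at (80, 300)"

def estimate_depth(image_path: str) -> str:
    """Placeholder for a monocular depth / 3D understanding model."""
    return "mug is ~0.4 m in front of the gripper"

TOOLS: Dict[str, Callable[[str], str]] = {
    "detect_objects": detect_objects,
    "estimate_depth": estimate_depth,
}

def answer_with_tools(mllm: Callable[[str], str], question: str, image_path: str) -> str:
    # Round 1: let the model request a tool by name, or answer directly.
    request = mllm(
        f"Question: {question}\nAvailable tools: {list(TOOLS)}\n"
        "Reply with a tool name to call, or ANSWER: <text>."
    )
    if request.startswith("ANSWER:"):
        return request.removeprefix("ANSWER:").strip()
    observation = TOOLS.get(request.strip(), lambda _: "unknown tool")(image_path)
    # Round 2: answer with the tool observation folded into the prompt.
    return mllm(f"Question: {question}\nTool result: {observation}\nGive the final answer.")
```

In use, `mllm` would be a thin wrapper around whatever multimodal model API is available; the loop itself is only meant to convey the general tool-calling pattern.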
Authors
Yu Qi
Northeastern University
Haibo Zhao
Northeastern University
Ziyu Guo
The Chinese University of Hong Kong
Multi-modality Learning, LLM/VLMs, 3D Vision
Siyuan Ma
Westlake University, The Chinese University of Hong Kong
Ziyan Chen
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
Generative AI, Low Level Vision
Yaokun Han
The Chinese University of Hong Kong
Renrui Zhang
Seed ByteDance & MMLab & PKU
Large Multimodal Model, Generative Model, Embodied AI
Zitiantao Lin
Northeastern University
Shiji Xin
Harvard University
Yijian Huang
Northeastern University
Kai Cheng
Purdue University
Peiheng Wang
Peking University
Jiazheng Liu
Peking University
Jiayi Zhang
Northeastern University
Yizhe Zhu
Northeastern University
Wenqing Wang
Postdoctoral researcher, Iowa State University
Dynamic Modeling, Hierarchical Control, Model Predictive Control, Stochastic Control
Yiran Qin
University of Oxford
Xupeng Zhu
PhD student at Northeastern University
Robotics, Geometric Deep Learning, Imitation Learning, Reinforcement learning
Haojie Huang
Northeastern University
Robotics, Learning, Perception
Lawson L. S. Wong
Northeastern University