Embodied Arena: A Comprehensive, Unified, and Evolving Evaluation Platform for Embodied AI

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Embodied AI research faces three critical bottlenecks: ill-defined core capabilities, the absence of cross-benchmark evaluation frameworks, and insufficient automation and scalability in embodied data generation. To address these, we introduce the first systematic, continuously evolving unified evaluation platform for embodied AI. Our method establishes a three-tier capability taxonomy, encompassing perception, reasoning, and task execution, and designs a standardized cross-benchmark evaluation framework, complemented by a real-time dual-perspective leaderboard. We further deploy an LLM-driven automated data generation pipeline and integrate 22 diverse benchmarks and over 30 state-of-the-art models to comprehensively support 2D/3D embodied question answering, navigation, and task planning. The platform has evaluated 30+ models from 20+ institutions, yielding nine key empirical findings. This work significantly enhances the comparability, reproducibility, and scalability of embodied AI evaluation.
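The three-tier taxonomy described above can be sketched as a nested mapping from levels to core capabilities to fine-grained dimensions. Only the three level names (perception, reasoning, task execution) come from the summary; every capability and dimension name below is a hypothetical placeholder, not the paper's actual list.

```python
# Hypothetical sketch of a three-tier capability taxonomy:
# level -> core capability -> fine-grained dimensions.
# The three level names are from the paper; all capability and
# dimension names below are illustrative placeholders.
TAXONOMY = {
    "perception": {
        "object_recognition": ["attribute grounding", "spatial localization"],
        "scene_understanding": ["3D layout", "affordance detection"],
    },
    "reasoning": {
        "spatial_reasoning": ["relative position", "route inference"],
        "causal_reasoning": ["action outcome prediction"],
    },
    "task_execution": {
        "navigation": ["goal reaching", "obstacle avoidance"],
        "task_planning": ["subtask decomposition", "plan repair"],
        "manipulation": ["tool use"],
    },
}

def count_capabilities(taxonomy):
    """Count core capabilities (second tier) across all levels."""
    return sum(len(caps) for caps in taxonomy.values())

def count_dimensions(taxonomy):
    """Count fine-grained dimensions (third tier) across all capabilities."""
    return sum(len(dims) for caps in taxonomy.values() for dims in caps.values())
```

In the paper the second and third tiers contain seven capabilities and 25 dimensions respectively; the placeholder tree above uses seven capabilities but a smaller set of dimensions.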

📝 Abstract
Embodied AI development significantly lags behind large foundation models due to three critical challenges: (1) a lack of systematic understanding of the core capabilities needed for Embodied AI, leaving research without clear objectives; (2) the absence of unified and standardized evaluation systems, rendering cross-benchmark evaluation infeasible; and (3) underdeveloped automated and scalable acquisition methods for embodied data, creating critical bottlenecks for model scaling. To address these obstacles, we present Embodied Arena, a comprehensive, unified, and evolving evaluation platform for Embodied AI. Our platform establishes a systematic embodied capability taxonomy spanning three levels (perception, reasoning, task execution), seven core capabilities, and 25 fine-grained dimensions, enabling unified evaluation with systematic research objectives. We introduce a standardized evaluation system built upon unified infrastructure supporting flexible integration of 22 diverse benchmarks across three domains (2D/3D Embodied Q&A, Navigation, Task Planning) and 30+ advanced models from 20+ institutes worldwide. Additionally, we develop a novel LLM-driven automated generation pipeline that ensures scalable embodied evaluation data and continuous evolution for diversity and comprehensiveness. Embodied Arena publishes three real-time leaderboards (Embodied Q&A, Navigation, Task Planning) with dual perspectives (benchmark view and capability view), providing comprehensive overviews of advanced model capabilities. In particular, we present nine findings summarized from the evaluation results on the Embodied Arena leaderboards. These help establish clear research directions and pinpoint critical research problems, thereby driving progress in the field of Embodied AI.
Problem

Research questions and friction points this paper is trying to address.

Lack of systematic understanding of core embodied AI capabilities
Absence of unified, standardized evaluation systems across benchmarks
Underdeveloped methods for automated, scalable embodied data acquisition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic embodied capability taxonomy for unified evaluation
Standardized evaluation system integrating diverse benchmarks and models
LLM-driven automated generation pipeline for scalable data
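The dual-perspective leaderboard idea can be illustrated with a small sketch: per-benchmark scores (the benchmark view) are mapped through a benchmark-to-capability tagging to produce per-capability averages (the capability view). The benchmark names and tagging below are hypothetical, not the platform's actual configuration.

```python
# Hypothetical benchmark -> capability tagging; names are illustrative
# placeholders, not Embodied Arena's actual benchmarks or tags.
BENCHMARK_CAPABILITIES = {
    "EQA-2D": ["perception", "reasoning"],
    "NavBench": ["task_execution"],
    "PlanBench": ["reasoning", "task_execution"],
}

def capability_view(benchmark_scores):
    """Aggregate one model's benchmark-view scores into a capability view
    by averaging every benchmark score tagged with each capability."""
    buckets = {}
    for bench, score in benchmark_scores.items():
        for cap in BENCHMARK_CAPABILITIES[bench]:
            buckets.setdefault(cap, []).append(score)
    return {cap: sum(s) / len(s) for cap, s in buckets.items()}

# One model's (made-up) benchmark-view scores:
model_scores = {"EQA-2D": 0.72, "NavBench": 0.55, "PlanBench": 0.61}
print(capability_view(model_scores))
```

A real platform would also need per-benchmark score normalization before averaging, since different benchmarks report different metrics on different scales.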
👥 Authors

Fei Ni
Imperial College London
Reinforcement Learning · Embodied AI

Min Zhang
Huawei Noah’s Ark Lab

Pengyi Li
Sun Yat-sen University

Yifu Yuan
Tianjin University
Reinforcement Learning

Lingfeng Zhang
PhD student at Tsinghua University
Embodied AI

Yuecheng Liu
Peking University

Peilong Han
Tsinghua University

Longxin Kou
Master of Software Engineering, Tianjin University
Embodied AI · Video Understanding · Large Language Model · Multimodal Large Model

Shaojin Ma
Nanjing University

Jinbin Qiao
Institute of Computing Technology, Chinese Academy of Sciences

David Gamaliel Arcos Bravo
Imperial College London

Yuening Wang
King’s College London

Xiao Hu
University College London

Zhanguang Zhang
TU Darmstadt

Xianze Yao
Sun Yat-sen University

Yutong Li
Shanghai Jiao Tong University

Zhao Zhang
Tsinghua University

Ying Wen
Associate Professor, Shanghai Jiao Tong University
Multi-Agent Learning · Reinforcement Learning

Ying-Cong Chen
Hong Kong University of Science and Technology (Guangzhou)
Computer Vision and Pattern Recognition

Xiaodan Liang
Professor of Computer Science, Sun Yat-sen University, MBZUAI, CMU, NUS
Computer Vision · Embodied AI · Machine Learning

Liang Lin
Fellow of IEEE/IAPR, Professor of Computer Science, Sun Yat-sen University
Embodied AI · Causal Inference and Learning · Multimodal Data Analysis

Bin He
Tianjin University

Haitham Bou-Ammar
RL-Team Leader, BO-Team Leader, MAS-Team Leader, Huawei Noah's Ark Lab, H. Assistant Professor @ UCL
Machine Learning · Reinforcement Learning · Optimisation · Variational Inference

He Wang
Shanghai Jiao Tong University

Huazhe Xu
Tsinghua University
Embodied AI · Reinforcement Learning · Computer Vision · Deep Learning