A2Eval: Agentic and Automated Evaluation for Embodied Brain

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current evaluations of embodied vision-language models rely on static, human-annotated benchmarks, which suffer from high redundancy, uneven coverage, substantial annotation costs, and ranking bias. To address these limitations, this work proposes A2Eval, a novel framework that introduces, for the first time, a dual-agent mechanism comprising a data agent and an evaluation agent that collaboratively construct a balanced, compact evaluation set and synthesize executable assessment protocols. By integrating multi-agent systems, capability-dimension abstraction, and automated evaluation pipelines, A2Eval significantly enhances both the efficiency and fairness of model assessment. Experiments across 10 benchmarks and 13 models demonstrate that the method reduces the evaluation set size by 85%, lowers computational costs by 77%, accelerates evaluation by 4.6×, and achieves strong alignment with human judgments (Spearman’s ρ = 0.85, Kendall’s τ = 0.81).
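The human-alignment numbers above are standard rank-correlation statistics between two model rankings (e.g. scores on the full benchmark vs. the compressed suite). A minimal sketch of how such agreement is measured, using `scipy.stats` and hypothetical scores that are not from the paper:

```python
from scipy.stats import spearmanr, kendalltau

# Hypothetical scores (illustrative only, not the paper's data):
# six models scored on the full benchmark vs. a compressed suite.
full_suite = [72.1, 68.4, 65.0, 61.3, 58.9, 55.2]
compact    = [70.8, 69.0, 62.0, 63.5, 56.0, 57.1]

# Spearman's rho correlates the rank orders; Kendall's tau counts
# concordant vs. discordant model pairs across the two rankings.
rho, _ = spearmanr(full_suite, compact)
tau, _ = kendalltau(full_suite, compact)
print(f"Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
# → Spearman rho = 0.89, Kendall tau = 0.73
```

Values near 1.0 mean the compressed suite preserves the full benchmark's model ordering; A2Eval reports ρ = 0.85 and τ = 0.81 against human judgments.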

📝 Abstract
Current embodied VLM evaluation relies on static, expert-defined, manually annotated benchmarks that exhibit severe redundancy and coverage imbalance. This labor-intensive paradigm drains computational and annotation resources, inflates costs, and distorts model rankings, ultimately stifling iterative development. To address this, we propose Agentic Automatic Evaluation (A2Eval), the first agentic framework that automates benchmark curation and evaluation through two collaborative agents. The Data Agent autonomously induces capability dimensions and assembles a balanced, compact evaluation suite, while the Eval Agent synthesizes and validates executable evaluation pipelines, enabling fully autonomous, high-fidelity assessment. Evaluated across 10 benchmarks and 13 models, A2Eval compresses evaluation suites by 85%, reduces overall computational costs by 77%, and delivers a 4.6× speedup while preserving evaluation quality. Crucially, A2Eval corrects systematic ranking biases, improves human alignment to Spearman's ρ = 0.85, and maintains high ranking fidelity (Kendall's τ = 0.81), establishing a new standard for high-fidelity, low-cost embodied assessment. Our code and data will be made public soon.
Problem

Research questions and friction points this paper is trying to address.

embodied VLM evaluation
static benchmarks
annotation redundancy
coverage imbalance
ranking bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic Evaluation
Automated Benchmarking
Embodied VLM
Collaborative Agents
High-Fidelity Assessment
Shuai Zhang
Zhejiang University, China; Westlake University, China; Beijing Innovation Center of Humanoid Robotics, China
Jiayu Hu
Beijing Innovation Center of Humanoid Robotics, China
Zijie Chen
Westlake University
deep learning
Zeyuan Ding
Beijing Innovation Center of Humanoid Robotics, China
Yi Zhang
Beijing Institute of Technology
Yingji Zhang
University of Manchester
Computational Linguistics, Representation Learning, Disentanglement, Multi-modal Learning
Ziyi Zhou
Beijing Innovation Center of Humanoid Robotics, China; Southern University of Science and Technology, China
Junwei Liao
Beijing Innovation Center of Humanoid Robotics, China
Shengjie Zhou
Xiamen University Malaysia, Malaysia
Yong Dai
Beijing Innovation Center of Humanoid Robotics, China
Zhenzhong Lan
School of Engineering, Westlake University
NLP, Computer Vision, Multimedia
Xiaozhu Ju
Beijing Innovation Center of Humanoid Robotics, China