🤖 AI Summary
This work addresses the lack of systematic benchmarks in robotic learning, which hinders comprehensive evaluation of models on complex, diverse behaviors. To this end, we propose GM-100, an embodied intelligence evaluation benchmark comprising 100 meticulously designed tasks, introducing for the first time an "Olympics"-style holistic assessment framework. The tasks are grounded in human manipulation primitives and object affordance analysis, span a broad spectrum of human-robot interaction scenarios, including long-tail cases, and are supported by multi-platform trajectory data to enable systematic evaluation of vision-language-action (VLA) models. Experiments demonstrate that GM-100 is both executable and challenging, effectively differentiating the performance of state-of-the-art VLA models and thereby establishing a high-quality, unified, and diverse evaluation standard for embodied AI research.
📝 Abstract
Recently, with the rapid development of robot learning and imitation learning, numerous datasets and methods have emerged. However, these datasets and their task designs often lack systematic consideration and guiding principles. This raises important questions: Do current datasets and task designs truly advance the capabilities of robotic agents? Can evaluations on a few common tasks accurately reflect the differentiated performance of methods that different teams propose and evaluate on different tasks? To address these issues, we introduce the Great March 100 (**GM-100**) as a first step towards a robot learning Olympics. GM-100 consists of 100 carefully designed tasks covering a wide range of interactions and long-tail behaviors, aiming to comprehensively evaluate the capabilities of robotic agents and to promote diversity and complexity in robot dataset task design. These tasks are developed through systematic analysis and expansion of existing task designs, combined with insights from human-object interaction primitives and object affordances. We collect a large amount of trajectory data on different robotic platforms and evaluate several baseline models. Experimental results demonstrate that the GM-100 tasks are 1) feasible to execute and 2) sufficiently challenging to effectively differentiate the performance of current vision-language-action (VLA) models. Our data and code are available at https://rhos.ai/research/gm-100.