🤖 AI Summary
Traditional robot manipulation evaluation relies on fixed benchmarks designed by a small number of experts, where task intent and success criteria are predefined and hard to extend, limiting evaluation diversity and accessibility. This work proposes RoboPlayground, a language-driven evaluation framework that lets users author executable manipulation tasks in natural language within structured physical environments. Task descriptions are automatically compiled into reproducible specifications comprising asset definitions, initialization distributions, and success predicates. Because each instruction defines a structured family of related tasks, the approach supports controlled semantic and behavioral variation while broadening who can author evaluations. User studies show the language interface is significantly more usable and imposes lower cognitive load than programming-based baselines; large-scale evaluations expose generalization gaps in existing policies that fixed benchmarks miss; and task diversity is shown to scale with contributor diversity rather than task count alone.
📝 Abstract
Evaluation of robotic manipulation systems has largely relied on fixed benchmarks authored by a small number of experts, where task instances, constraints, and success criteria are predefined and difficult to extend. This paradigm limits who can shape evaluation and obscures how policies respond to user-authored variations in task intent, constraints, and notions of success. We argue that evaluating modern manipulation policies requires reframing evaluation as a language-driven process over structured physical domains. We present RoboPlayground, a framework that enables users to author executable manipulation tasks using natural language within a structured physical domain. Natural language instructions are compiled into reproducible task specifications with explicit asset definitions, initialization distributions, and success predicates. Each instruction defines a structured family of related tasks, enabling controlled semantic and behavioral variation while preserving executability and comparability. We instantiate RoboPlayground in a structured block manipulation domain and evaluate it along three axes. A user study shows that the language-driven interface is easier to use and imposes lower cognitive workload than programming-based and code-assist baselines. Evaluating learned policies on language-defined task families reveals generalization failures that are not apparent under fixed benchmark evaluations. Finally, we show that task diversity scales with contributor diversity rather than task count alone, enabling evaluation spaces to grow continuously through crowd-authored contributions. Project Page: https://roboplayground.github.io
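To make the compilation target concrete, here is a minimal sketch of what a task specification with the three components named in the abstract (asset definitions, an initialization distribution, and a success predicate) might look like. All names (`Asset`, `TaskSpec`, `make_stack_task`, the workspace dimensions and tolerances) are illustrative assumptions, not RoboPlayground's actual API.

```python
# Hypothetical sketch of a compiled task specification; names and thresholds
# are illustrative, not taken from RoboPlayground.
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Asset:
    name: str
    color: str

@dataclass
class TaskSpec:
    instruction: str                              # the authoring-language instruction
    assets: list                                  # asset definitions
    sample_init: Callable[[random.Random], dict]  # initialization distribution
    is_success: Callable[[dict], bool]            # success predicate

def make_stack_task(colors=("red", "blue")):
    """One member of a task family; varying `colors` yields related tasks."""
    assets = [Asset(f"block_{c}", c) for c in colors]

    def sample_init(rng):
        # Sample a random planar pose for each block in a 1 m x 1 m workspace.
        return {a.name: (rng.uniform(0, 1), rng.uniform(0, 1), 0.0) for a in assets}

    def is_success(state):
        # Success: first block rests on the second (z offset ~ one block height).
        tx, ty, tz = state[assets[0].name]
        bx, by, bz = state[assets[1].name]
        return abs(tx - bx) < 0.02 and abs(ty - by) < 0.02 and 0.03 < tz - bz < 0.06

    return TaskSpec(f"stack the {colors[0]} block on the {colors[1]} block",
                    assets, sample_init, is_success)

spec = make_stack_task()
init_state = spec.sample_init(random.Random(0))
print(spec.instruction, sorted(init_state))
```

Keeping the initialization distribution and success predicate as explicit, seedable functions is one way to get the reproducibility the abstract emphasizes: the same specification replayed with the same seed yields the same episode.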