🤖 AI Summary
Existing robot benchmarks struggle to capture the vast diversity of scene layouts, object geometries, and task specifications encountered in real-world environments, limiting the evaluation of policy generalization in long-tail everyday scenarios. To address this, we propose MolmoSpaces, a large-scale, open, and simulator-agnostic evaluation ecosystem that integrates over 230,000 diverse indoor scenes and 130,000 richly annotated objects, including 48,000 manipulable items with 42 million stable grasp poses, together with MolmoSpaces-Bench, an 8-task benchmark suite covering navigation, static and mobile manipulation, and cross-room long-horizon tasks. Built on procedural generation, a multi-simulator architecture (MuJoCo, Isaac, ManiSkill), and standardized interfaces, the ecosystem ensures high reproducibility and demonstrates strong sim-to-real correlation (R = 0.96, ρ = 0.98), confirming that newer and stronger zero-shot policies outperform earlier versions and revealing the critical impact of prompt phrasing, initial joint positions, and camera occlusion on task success.
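To make the "standardized interfaces" claim concrete, here is a minimal sketch of what a simulator-agnostic evaluation loop could look like. All class and method names (`SimulatorBackend`, `load_scene`, `StepResult`, `evaluate`) are hypothetical illustrations, not the actual MolmoSpaces API; the point is only that each physics backend implements one common contract, so the same policy and benchmark code can run on MuJoCo, Isaac, or ManiSkill.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class StepResult:
    observation: dict  # e.g., camera images and proprioception
    reward: float
    done: bool
    info: dict         # e.g., {"success": True} at episode end


class SimulatorBackend(ABC):
    """Hypothetical common interface a MuJoCo/Isaac/ManiSkill backend would implement."""

    @abstractmethod
    def load_scene(self, scene_id: str) -> None:
        """Load an indoor scene by identifier."""

    @abstractmethod
    def reset(self, seed: int | None = None) -> dict:
        """Reset the episode and return the initial observation."""

    @abstractmethod
    def step(self, action: list[float]) -> StepResult:
        """Advance the simulation by one control step."""


def evaluate(backend: SimulatorBackend, policy, scene_id: str, max_steps: int = 500) -> bool:
    """Run one episode and report task success, independent of the backend in use."""
    backend.load_scene(scene_id)
    obs = backend.reset(seed=0)
    for _ in range(max_steps):
        result = backend.step(policy(obs))
        obs = result.observation
        if result.done:
            return bool(result.info.get("success", False))
    return False
```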
📝 Abstract
Deploying robots at scale demands robustness to the long tail of everyday situations. The variations in scene layout, object geometry, and task specification that characterize real environments are vast and underrepresented in existing robot benchmarks. Measuring this level of generalization requires infrastructure at a scale and diversity that physical evaluation alone cannot provide. We introduce MolmoSpaces, a fully open ecosystem to support large-scale benchmarking of robot policies. MolmoSpaces consists of over 230k diverse indoor environments, ranging from handcrafted household scenes to procedurally generated multiroom houses, populated with 130k richly annotated object assets, including 48k manipulable objects with 42M stable grasps. Crucially, these environments are simulator-agnostic, supporting popular options such as MuJoCo, Isaac, and ManiSkill. The ecosystem supports the full spectrum of embodied tasks: static and mobile manipulation, navigation, and multiroom long-horizon tasks requiring coordinated perception, planning, and interaction across entire indoor environments. We also design MolmoSpaces-Bench, a benchmark suite of 8 tasks in which robots interact with our diverse scenes and richly annotated objects. Our experiments show that MolmoSpaces-Bench exhibits strong sim-to-real correlation (R = 0.96, ρ = 0.98), confirm that newer and stronger zero-shot policies outperform earlier versions on our benchmarks, and identify key sensitivities to prompt phrasing, initial joint positions, and camera occlusion. Through MolmoSpaces and its open-source assets and tooling, we provide a foundation for scalable data generation, policy training, and benchmark creation for robot learning research.
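For readers unfamiliar with the reported statistics, R is the Pearson (linear) correlation and ρ the Spearman (rank) correlation between paired simulated and real-world success rates. The sketch below shows how such numbers are computed; the success rates are placeholder values for illustration, not the paper's measurements.

```python
# Compute Pearson R and Spearman rho over paired sim/real success rates.
# The data below are illustrative placeholders, not results from the paper.
from scipy.stats import pearsonr, spearmanr

# One simulated and one real-world success rate per evaluated policy.
sim_success = [0.15, 0.30, 0.42, 0.55, 0.68, 0.80]
real_success = [0.10, 0.28, 0.45, 0.50, 0.66, 0.77]

r, _ = pearsonr(sim_success, real_success)      # linear correlation R
rho, _ = spearmanr(sim_success, real_success)   # rank correlation rho
print(f"Pearson R = {r:.2f}, Spearman rho = {rho:.2f}")
```

A high Spearman ρ is the more actionable of the two for benchmarking: it means the simulator preserves the *ranking* of policies, so a policy that scores higher in simulation can be expected to score higher on the real robot as well.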