🤖 AI Summary
Existing evaluations of embodied world models predominantly emphasize perceptual fidelity while neglecting functional utility in downstream tasks, and they lack a unified, multidimensional assessment framework. To address this gap, this work proposes WorldArena, the first benchmark to integrate perceptual and functional evaluation into a cohesive framework. It encompasses 16 automated perceptual metrics, three embodied agent tasks (using world models as data engines, policy evaluators, and action planners), human subjective assessment, and a novel composite metric, EWMScore. Experiments across 14 representative models reveal a significant perception-functionality gap: high visual quality does not necessarily translate into strong task performance. The benchmark platform and leaderboard are publicly released at https://world-arena.ai to advance the development of functionally capable embodied world models.
📝 Abstract
While world models have emerged as a cornerstone of embodied intelligence by enabling agents to reason about environmental dynamics through action-conditioned prediction, their evaluation remains fragmented. Current evaluation of embodied world models has largely focused on perceptual fidelity (e.g., video generation quality), overlooking the functional utility of these models in downstream decision-making tasks. In this work, we introduce WorldArena, a unified benchmark designed to systematically evaluate embodied world models across both perceptual and functional dimensions. WorldArena assesses models along three dimensions: video perception quality, measured with 16 metrics across six sub-dimensions; embodied task functionality, which evaluates world models as data engines, policy evaluators, and action planners; and subjective human evaluation. Furthermore, we propose EWMScore, a holistic metric that integrates multi-dimensional performance into a single interpretable index. Through extensive experiments on 14 representative models, we reveal a significant perception-functionality gap, showing that high visual quality does not necessarily translate into strong embodied task capability. The WorldArena benchmark and its public leaderboard are released at https://world-arena.ai, providing a framework for tracking progress toward truly functional world models in embodied AI.
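The abstract does not spell out how EWMScore combines the three dimensions into one index. As a rough illustration only, composite benchmark scores are commonly built by normalizing each dimension's raw score and taking a weighted mean. The sketch below is hypothetical: the dimension names, min-max normalization, and equal weights are assumptions for illustration, not WorldArena's actual definition.

```python
# Hypothetical sketch of a composite benchmark index in the spirit of
# EWMScore. The real formula is not given in the abstract; this assumes
# min-max normalization per dimension and a weighted mean.

def normalize(value, lo, hi):
    """Min-max normalize a raw metric into [0, 1]."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def composite_score(scores, weights):
    """Weighted mean of already-normalized per-dimension scores."""
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Illustrative dimensions mirroring WorldArena's structure
# (perception, functionality, human evaluation); values are made up.
scores = {"perception": 0.82, "functionality": 0.41, "human": 0.63}
weights = {"perception": 1.0, "functionality": 1.0, "human": 1.0}
print(round(composite_score(scores, weights), 3))  # equal-weight mean
```

Note how a weighted aggregate can mask the perception-functionality gap the paper highlights: a model with high perceptual scores but weak task functionality can still earn a middling composite score, which is why the benchmark also reports per-dimension results.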