PokeGym: A Visually-Driven Long-Horizon Benchmark for Vision-Language Models

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) struggle with long-horizon, purely vision-driven interactive tasks in complex 3D embodied environments, and effective evaluation benchmarks are lacking. This work proposes PokeGym, a 3D open-world VLM benchmark built on *Pokemon Legends: Z-A*, using only raw RGB inputs and leveraging in-game memory scanning for automatic success verification. The benchmark comprises 30 tasks spanning navigation and interaction, each issued at one of three instruction granularities: Visual-Guided, Step-Guided, or Goal-Only. It uniquely combines purely visual input with fully automated evaluation, revealing that physical deadlock recovery is a primary bottleneck for current models, with deadlock occurrence strongly negatively correlated with task success rates. The study further uncovers a metacognitive disparity between strong and weak models in deadlock awareness, offering insights for improving spatial reasoning in embodied AI.
📝 Abstract
While Vision-Language Models (VLMs) have achieved remarkable progress in static visual understanding, their deployment in complex 3D embodied environments remains severely limited. Existing benchmarks suffer from four critical deficiencies: (1) passive perception tasks circumvent interactive dynamics; (2) simplified 2D environments fail to assess depth perception; (3) privileged state leakage bypasses genuine visual processing; and (4) human evaluation is prohibitively expensive and unscalable. We introduce PokeGym, a visually-driven long-horizon benchmark instantiated within *Pokemon Legends: Z-A*, a visually complex 3D open-world role-playing game. PokeGym enforces strict code-level isolation: agents operate solely on raw RGB observations while an independent evaluator verifies success via memory scanning, ensuring pure vision-based decision-making and automated, scalable assessment. The benchmark comprises 30 tasks (30-220 steps) spanning navigation, interaction, and mixed scenarios, with three instruction granularities (Visual-Guided, Step-Guided, Goal-Only) to systematically deconstruct visual grounding, semantic reasoning, and autonomous exploration capabilities. Our evaluation reveals a key limitation of current VLMs: physical deadlock recovery, rather than high-level planning, constitutes the primary bottleneck, with deadlocks showing a strong negative correlation with task success. Furthermore, we uncover a metacognitive divergence: weaker models predominantly suffer from Unaware Deadlocks (oblivious to entrapment), whereas advanced models exhibit Aware Deadlocks (recognizing entrapment yet failing to recover). These findings highlight the need to integrate explicit spatial intuition into VLM architectures. The code and benchmark will be available on GitHub.
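The code-level isolation the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual harness; all function names and signatures (`agent_policy`, `get_frame`, `apply_action`, `success_check`) are hypothetical placeholders. The key property it demonstrates is that the policy receives only pixels, while success is verified out-of-band by an evaluator-side predicate (standing in for the memory scan) that the agent can never read.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Transition:
    frame: bytes  # raw RGB observation: the only signal the agent sees
    action: str


def run_episode(
    agent_policy: Callable[[bytes], str],   # VLM agent: pixels in, action out
    get_frame: Callable[[], bytes],         # emulator screenshot (hypothetical hook)
    apply_action: Callable[[str], None],    # emulator input (hypothetical hook)
    success_check: Callable[[], bool],      # evaluator-side memory-scan predicate
    max_steps: int = 220,                   # upper bound matching the 30-220 step range
) -> Tuple[bool, List[Transition]]:
    """Run one task episode under strict agent/evaluator isolation.

    The agent never receives privileged game state, only the RGB frame;
    the evaluator checks success independently after each step.
    """
    trace: List[Transition] = []
    for _ in range(max_steps):
        frame = get_frame()
        action = agent_policy(frame)
        trace.append(Transition(frame, action))
        apply_action(action)
        if success_check():  # automatic verification, invisible to the agent
            return True, trace
    return False, trace  # step budget exhausted without success
```

Keeping `success_check` outside the agent's observation loop is what makes evaluation both leak-free and fully automated: the same episode driver works for any model and any task, since only the predicate changes.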
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
3D embodied environments
visual understanding
benchmarking
spatial reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language models
embodied AI
long-horizon benchmark
visual grounding
deadlock recovery