AI Summary
This work investigates whether vision-language models (VLMs) can improve their decision-making through reflective learning, i.e., by observing replays of their own gameplay failures alongside expert tutorial videos. To this end, we introduce the first benchmark supporting video-based reflective learning, establish a "reflect-and-retry" interaction paradigm, and incorporate a cognitive hierarchy taxonomy, a dual action space for control, and milestone-based evaluation to systematically measure how VLM policies evolve. Experimental results demonstrate that, without any additional training, integrating failure trajectories with tutorial videos significantly improves policy performance, validating the efficacy of video-based reflective learning and highlighting its potential as a training-free analogue to combining reinforcement learning with supervised fine-tuning.
Abstract
Human gameplay is a visually grounded interaction loop in which players act, reflect on failures, and watch tutorials to refine strategies. Can Vision-Language Models (VLMs) also learn from video-based reflection? We present GameVerse, a comprehensive video game benchmark that enables a reflective visual interaction loop. Moving beyond traditional fire-and-forget evaluations, it uses a novel reflect-and-retry paradigm to assess how VLMs internalize visual experience and improve their policies. To facilitate systematic and scalable evaluation, we also introduce a cognitive hierarchical taxonomy spanning 15 globally popular games, a dual action space covering both semantic and GUI control, and milestone-based evaluation using advanced VLMs to quantify progress. Our experiments show that VLMs benefit from video-based reflection across varied settings, and perform best when combining failure trajectories with expert tutorials, a training-free analogue to reinforcement learning (RL) plus supervised fine-tuning (SFT).
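The reflect-and-retry loop described above can be sketched as follows. This is a minimal illustrative skeleton, not GameVerse's actual API: the `Agent`, `play_episode`, and `reflect` names, the toy scoring rule, and the notion of accumulating reflections as context are all hypothetical stand-ins for the benchmark's real environment and VLM agent.

```python
# Hedged sketch of a "reflect-and-retry" loop: on failure, the agent
# reflects over its failure replay plus expert tutorial frames (the
# combination the paper reports works best), then retries. All names
# and the toy scoring logic below are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Trajectory:
    frames: List[str]        # observations (e.g., screenshots) from one attempt
    milestones_hit: int      # milestone-based progress measure
    succeeded: bool


@dataclass
class Agent:
    # Reflections accumulated across retries act as a training-free
    # "policy update" carried in context rather than in weights.
    notes: List[str] = field(default_factory=list)

    def play_episode(self, attempt: int) -> Trajectory:
        # Toy stand-in for a VLM rollout: progress grows with reflections.
        score = min(len(self.notes) + 1, 3)
        return Trajectory(
            frames=[f"frame_{attempt}"],
            milestones_hit=score,
            succeeded=score >= 3,
        )

    def reflect(self, failure: Trajectory, tutorial_frames: List[str]) -> None:
        # Combine the failure replay with tutorial video; here we just
        # record a note standing in for the VLM's written reflection.
        self.notes.append(
            f"reached {failure.milestones_hit} milestones; imitate tutorial"
        )


def reflect_and_retry(agent: Agent, tutorial_frames: List[str],
                      max_attempts: int = 5) -> Trajectory:
    traj = agent.play_episode(0)
    for attempt in range(1, max_attempts):
        if traj.succeeded:
            break
        agent.reflect(traj, tutorial_frames)  # no gradient updates involved
        traj = agent.play_episode(attempt)
    return traj


final = reflect_and_retry(Agent(), tutorial_frames=["tut_0", "tut_1"])
print(final.succeeded, final.milestones_hit)
```

With the toy scoring rule, the agent fails its first two attempts, accumulates two reflections, and succeeds on the third, mirroring the paper's finding that policies improve across retries without any weight updates.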