GameVerse: Can Vision-Language Models Learn from Video-based Reflection?

📅 2026-03-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether vision-language models (VLMs) can enhance their decision-making through reflective learning, i.e., by observing replays of their own gameplay failures alongside expert tutorial videos. To this end, the authors introduce the first benchmark supporting video-based reflective learning, establish a "reflect-and-retry" interaction paradigm, and incorporate a cognitive hierarchy taxonomy, dual-action-space control, and milestone-based evaluation to systematically measure how VLM policies evolve. Experimental results demonstrate that, without any additional training, combining failure trajectories with tutorial videos significantly improves policy performance, validating the efficacy of video-based reflective learning and highlighting it as a training-free analogue to pairing reinforcement learning with supervised fine-tuning.

Technology Category

Application Category

πŸ“ Abstract
Human gameplay is a visually grounded interaction loop in which players act, reflect on failures, and watch tutorials to refine strategies. Can Vision-Language Models (VLMs) also learn from video-based reflection? We present GameVerse, a comprehensive video game benchmark that enables a reflective visual interaction loop. Moving beyond traditional fire-and-forget evaluations, it uses a novel reflect-and-retry paradigm to assess how VLMs internalize visual experience and improve their policies. To facilitate systematic and scalable evaluation, we also introduce a cognitive hierarchical taxonomy spanning 15 globally popular games, a dual action space for both semantic and GUI control, and milestone-based evaluation using advanced VLMs to quantify progress. Our experiments show that VLMs benefit from video-based reflection in varied settings, and perform best when combining failure trajectories with expert tutorials, a training-free analogue to reinforcement learning (RL) plus supervised fine-tuning (SFT).
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
video-based reflection
gameplay learning
reflective learning
visual interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

reflect-and-retry
vision-language models
video-based reflection
cognitive hierarchical taxonomy
training-free learning
Kuan Zhang
College of AI, Tsinghua University, China
Dongchen Liu
College of AI, Tsinghua University, China
Qiyue Zhao
College of AI, Tsinghua University, China
Jinkun Hou
College of AI, Tsinghua University, China
Xinran Zhang
University of Science and Technology of China
SLAM · NeRF · 3DGS
Qinlei Xie
College of AI, Tsinghua University, China
Miao Liu
Assistant Professor at Tsinghua University, College of AI
Computer Vision · Deep Learning · Augmented Reality · Generative Model
Yiming Li
College of AI, Tsinghua University, China