VideoGameBench: Can Vision-Language Models complete popular video games?

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of evaluating vision-language models (VLMs) on embodied interactive tasks by introducing the first real-time, video-game-based vision-language benchmark, comprising 10 classic 1990s games, in which agents receive only raw pixel inputs and high-level task instructions, with no access to game APIs or auxiliary signals. Methodologically, it (1) formalizes intuitive human capabilities (perception, spatial navigation, and memory) as quantifiable in-game tasks; (2) introduces "hidden games" to rigorously assess cross-environment generalization; and (3) proposes pause-based reasoning (Lite mode), in which the game pauses while the model deliberates, to mitigate real-time latency bottlenecks. End-to-end closed-loop control is implemented with multimodal LMs, including Gemini 2.5 Pro. Experiments show that average game progress under the standard setting is merely 0.48%, rising to 1.6% in Lite mode, which identifies inference latency as the primary constraint and exposes fundamental limitations of current VLMs in embodied cognition.

📝 Abstract
Vision-language models (VLMs) have achieved strong results on coding and math benchmarks that are challenging for humans, yet their ability to perform tasks that come naturally to humans--such as perception, spatial navigation, and memory management--remains understudied. Real video games are crafted to be intuitive for humans to learn and master by leveraging innate inductive biases, making them an ideal testbed for evaluating such capabilities in VLMs. To this end, we introduce VideoGameBench, a benchmark consisting of 10 popular video games from the 1990s that VLMs directly interact with in real-time. VideoGameBench challenges models to complete entire games with access to only raw visual inputs and a high-level description of objectives and controls, a significant departure from existing setups that rely on game-specific scaffolding and auxiliary information. We keep three of the games secret to encourage solutions that generalize to unseen environments. Our experiments show that frontier vision-language models struggle to progress beyond the beginning of each game. We find inference latency to be a major limitation of frontier models in the real-time setting; therefore, we introduce VideoGameBench Lite, a setting where the game pauses while waiting for the LM's next action. The best performing model, Gemini 2.5 Pro, completes only 0.48% of VideoGameBench and 1.6% of VideoGameBench Lite. We hope that the formalization of the human skills mentioned above into this benchmark motivates progress in these research directions.
Problem

Research questions and friction points this paper is trying to address.

Evaluating VLMs' ability to perform human-like tasks in video games
Assessing VLMs' real-time interaction with raw visual inputs in games
Measuring generalization of VLMs to unseen gaming environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-time interaction with 10 classic games
Raw visual inputs and high-level objectives
Secret games to test generalization
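The interaction protocol described above, a closed loop where the agent sees only raw frames plus a high-level objective, and where Lite mode pauses the game during model inference, can be sketched as follows. This is a minimal illustration with hypothetical stand-ins (`StubEmulator`, `agent_loop`, `vlm_act`); VideoGameBench's actual API is not shown here, and the 60 fps tick rate is an assumption for the example.

```python
class StubEmulator:
    """Toy emulator standing in for a real game (hypothetical, for illustration)."""

    def __init__(self, lite_mode=False):
        self.lite_mode = lite_mode  # Lite mode: the game pauses during inference
        self.ticks = 0              # how much game time has elapsed

    def screenshot(self):
        # Stands in for the raw pixel frame the VLM receives.
        return f"frame_{self.ticks}"

    def step(self, action, inference_seconds):
        # In the real-time setting the game keeps running while the model
        # thinks, so inference latency costs unplayed frames (assume 60 fps).
        if not self.lite_mode:
            self.ticks += int(inference_seconds * 60)
        self.ticks += 1  # the chosen action plays for one tick


def agent_loop(emulator, vlm_act, objective, steps=3):
    """Closed loop: raw frame + objective in, keyboard/mouse action out.

    `vlm_act` stands in for the VLM call; it returns the chosen action and
    the inference latency it incurred.
    """
    for _ in range(steps):
        frame = emulator.screenshot()
        action, latency = vlm_act(frame, objective)
        emulator.step(action, latency)
    return emulator.ticks
```

With a stub model that takes 0.5 s per action, the real-time emulator loses 30 frames per decision while the Lite-mode emulator loses none, which is the latency gap the benchmark's two settings are designed to separate.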