GVGAI-LLM: Evaluating Large Language Model Agents with Infinite Games

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM benchmarks inadequately assess spatial and logical reasoning capabilities. To address this, we propose GVGAI-LLM, a lightweight, extensible benchmark built upon the General Video Game AI framework. Methodologically, it introduces (1) a programmable game description language enabling rapid generation of diverse arcade-style games and levels; (2) ASCII-based scene encoding coupled with spatially anchored prompting, enabling fine-grained, interpretable quantification of agent behavior; and (3) an overfitting-resistant evaluation protocol. Experiments reveal that state-of-the-art LLMs exhibit significant performance limitations on GVGAI-LLM, and existing reasoning-augmentation techniques yield only marginal gains. GVGAI-LLM establishes an open-source, reproducible, and mechanistically transparent paradigm for LLM reasoning evaluation, offering a principled, controllable testbed for probing foundational reasoning competencies.
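The ASCII scene encoding and spatially anchored prompting described above could look roughly like the sketch below. The grid symbols, legend, and prompt layout are illustrative assumptions, not taken from the benchmark itself:

```python
# Hypothetical sketch of ASCII scene encoding with spatially anchored
# prompting; the symbols and prompt format are assumptions.

GRID = [
    "wwwww",
    "w.A.w",
    "w.g.w",
    "wwwww",
]

LEGEND = {"w": "wall", ".": "floor", "A": "avatar", "g": "goal"}


def anchored_prompt(grid, legend):
    """Render the ASCII grid plus explicit (row, col) anchors for each
    movable or goal sprite, so the model can ground spatial references."""
    lines = ["Scene (row-major, top-left is (0, 0)):"]
    lines.extend(grid)
    lines.append("Objects:")
    for r, row in enumerate(grid):
        for c, ch in enumerate(row):
            if legend.get(ch) not in ("floor", "wall"):
                lines.append(f"  {legend[ch]} at (row={r}, col={c})")
    return "\n".join(lines)


print(anchored_prompt(GRID, LEGEND))
```

The explicit `(row, col)` anchors are one plausible way to ground spatial references; the paper reports that such spatial grounding yields only partial improvements.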

📝 Abstract
We introduce GVGAI-LLM, a video game benchmark for evaluating the reasoning and problem-solving capabilities of large language models (LLMs). Built on the General Video Game AI framework, it features a diverse collection of arcade-style games designed to test a model's ability to handle tasks that differ from most existing LLM benchmarks. The benchmark leverages a game description language that enables rapid creation of new games and levels, helping to prevent overfitting over time. Each game scene is represented by a compact set of ASCII characters, allowing for efficient processing by language models. GVGAI-LLM defines interpretable metrics, including the meaningful step ratio, step efficiency, and overall score, to assess model behavior. Through zero-shot evaluations across a broad set of games and levels with diverse challenges and skill depth, we reveal persistent limitations of LLMs in spatial reasoning and basic planning. Current models consistently exhibit spatial and logical errors, motivating structured prompting and spatial grounding techniques. While these interventions lead to partial improvements, the benchmark remains very far from solved. GVGAI-LLM provides a reproducible testbed for advancing research on language model capabilities, with a particular emphasis on agentic behavior and contextual reasoning.
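The interpretable metrics named in the abstract could be computed from an episode log along the lines of this minimal sketch. The exact definitions (which steps count as "meaningful", the efficiency baseline) are assumptions for illustration:

```python
# Minimal sketch of the benchmark's interpretable metrics; the precise
# definitions below are assumptions, not the paper's formulas.

def meaningful_step_ratio(steps):
    """Fraction of steps that changed the game state (assumed proxy
    for a 'meaningful' step)."""
    if not steps:
        return 0.0
    meaningful = sum(1 for s in steps if s["state_changed"])
    return meaningful / len(steps)


def step_efficiency(steps_taken, optimal_steps):
    """Ratio of an assumed optimal solution length to the steps actually
    taken, capped at 1.0."""
    if steps_taken == 0:
        return 0.0
    return min(1.0, optimal_steps / steps_taken)


episode = [
    {"state_changed": True},
    {"state_changed": False},  # e.g. the agent walked into a wall
    {"state_changed": True},
    {"state_changed": True},
]
print(meaningful_step_ratio(episode))    # 0.75
print(step_efficiency(len(episode), 3))  # 0.75
```

An overall score would additionally fold in the game's own reward signal; the paper leaves that game-specific.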
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' reasoning in diverse arcade-style games
Assessing spatial reasoning and planning limitations in LLMs
Providing reproducible metrics for agentic behavior research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Game description language for rapid game creation
ASCII character representation for efficient processing
Interpretable metrics to assess model behavior