BALROG: Benchmarking Agentic LLM and VLM Reasoning On Games

📅 2024-11-20
🏛️ arXiv.org
📈 Citations: 6
Influential: 1
🤖 AI Summary
Current large language models (LLMs) and vision-language models (VLMs) lack systematic evaluation of core agentic capabilities such as dynamic interaction, advanced spatial reasoning, long-horizon planning, and persistent exploration. To address this gap, the paper introduces BALROG, a game-based benchmark designed to evaluate these capabilities. Built on established reinforcement learning environments, including NetHack and MiniGrid, it spans a difficulty gradient from tasks a non-expert human can solve in seconds to ones that take years to master. Fine-grained metrics measure agent performance across environments and models. Notably, the evaluation uncovers a counterintuitive result: providing visual representations of the environment degrades model performance on several tasks. BALROG supports end-to-end LLM/VLM interaction, offers modular extensibility and graded difficulty levels, and is released publicly with code and a live leaderboard.

📝 Abstract
Large Language Models (LLMs) and Vision Language Models (VLMs) possess extensive knowledge and exhibit promising reasoning abilities; however, they still struggle to perform well in complex, dynamic environments. Real-world tasks require handling intricate interactions, advanced spatial reasoning, long-term planning, and continuous exploration of new strategies, areas in which we lack effective methodologies for comprehensively evaluating these capabilities. To address this gap, we introduce BALROG, a novel benchmark designed to assess the agentic capabilities of LLMs and VLMs through a diverse set of challenging games. Our benchmark incorporates a range of existing reinforcement learning environments with varying levels of difficulty, including tasks that are solvable by non-expert humans in seconds to extremely challenging ones that may take years to master (e.g., the NetHack Learning Environment). We devise fine-grained metrics to measure performance and conduct an extensive evaluation of several popular open-source and closed-source LLMs and VLMs. Our findings indicate that while current models achieve partial success in the easier games, they struggle significantly with more challenging tasks. Notably, we observe severe deficiencies in vision-based decision-making, as several models perform worse when visual representations of the environments are provided. We release BALROG as an open and user-friendly benchmark to facilitate future research and development in the agentic community. Code and Leaderboard at balrogai.com.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs and VLMs in complex dynamic environments
Assessing agentic capabilities through diverse challenging games
Identifying deficiencies in vision-based decision-making tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

BALROG benchmark tests LLM/VLM agentic reasoning
Uses diverse reinforcement learning game environments
Measures performance with fine-grained metrics
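The evaluation pattern the bullets describe, an agent receiving observations from a reinforcement learning environment, choosing actions, and being scored with a fine-grained progress metric rather than binary success, can be sketched as follows. All names here (`GridEnv`, `llm_policy`, `evaluate`) are hypothetical illustrations, not BALROG's actual API.

```python
class GridEnv:
    """Toy 1-D corridor: the agent starts at cell 0 and the goal is at `length`."""

    def __init__(self, length=5, max_steps=20):
        self.length, self.max_steps = length, max_steps

    def reset(self):
        self.pos, self.steps = 0, 0
        return f"You are at cell {self.pos} of {self.length}."

    def step(self, action):
        # Apply the agent's textual action and report the new observation.
        self.steps += 1
        if action == "right":
            self.pos = min(self.pos + 1, self.length)
        elif action == "left":
            self.pos = max(self.pos - 1, 0)
        done = self.pos == self.length or self.steps >= self.max_steps
        return f"You are at cell {self.pos} of {self.length}.", done


def llm_policy(observation):
    """Stand-in for a real LLM call that maps an observation to an action."""
    return "right"


def evaluate(env, policy):
    """Run one episode and return fractional progress toward the goal,
    a fine-grained metric instead of a binary success flag."""
    obs = env.reset()
    done = False
    while not done:
        obs, done = env.step(policy(obs))
    return env.pos / env.length


print(evaluate(GridEnv(), llm_policy))  # 1.0 for the scripted policy
```

In BALROG itself the policy would be an end-to-end LLM/VLM call and the environments would be games like NetHack or MiniGrid, but the loop structure, observe, act, score with a graded metric, is the same.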