Baba Is AI: Break the Rules to Beat the Benchmark

📅 2024-07-18
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
This work identifies a key limitation of current multimodal large language models (MLLMs): their inability to perform rule-level abstraction and dynamic rule reconfiguration. To probe this, the authors introduce BabaBench, a benchmark designed explicitly for rule manipulation and grounded in the logic-puzzle game *Baba Is You*. Models must reason about rewriting and recombining movable textual "rule blocks" in real time to redefine objects, properties, and goals. The benchmark treats rule manipulation as a core evaluation dimension and pairs symbolic world modeling with procedural level generation. Experiments on GPT-4o, Gemini-1.5-Pro, and Gemini-1.5-Flash reveal near-random performance (win rates below 5%) on tasks that demand creative rule restructuring. These results suggest that state-of-the-art MLLMs lack a robust grasp of rule semantics, structural constraints, and goal plasticity, highlighting a critical gap in rule cognition for general AI.

📝 Abstract
Humans solve problems by following existing rules and procedures, and also by leaps of creativity to redefine those rules and objectives. To probe these abilities, we developed a new benchmark based on the game Baba Is You, in which an agent manipulates both objects in the environment and rules, represented by movable tiles with words written on them, to reach a specified goal and win the game. We test three state-of-the-art multi-modal large language models (OpenAI GPT-4o, Google Gemini-1.5-Pro, and Gemini-1.5-Flash) and find that they fail dramatically when generalization requires manipulating and combining the rules of the game.
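To make the core mechanic concrete: in Baba Is You, the active rules are whatever NOUN-IS-X triples of word tiles happen to be aligned on the grid, so moving a single tile can change who "you" are or what counts as winning. The sketch below is a hypothetical illustration of that re-reading step, not the paper's code; the tile vocabulary and grid layout are assumptions.

```python
# Hypothetical sketch of rule re-reading in a Baba Is You-style grid
# (illustration only, not the benchmark's implementation).
# Rules are horizontal or vertical NOUN-IS-X runs of word tiles; an agent
# must recompute the active rule set every time a tile moves.

NOUNS = {"BABA", "FLAG", "WALL"}        # assumed tile vocabulary
PROPERTIES = {"YOU", "WIN", "STOP"}

def active_rules(grid):
    """Scan a 2D grid of tile strings ('' = empty floor) and return the
    set of (subject, attribute) rules formed by NOUN-IS-X triples."""
    rules = set()
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            # horizontal triple, read left to right
            if c + 2 < cols:
                a, b, d = grid[r][c], grid[r][c + 1], grid[r][c + 2]
                if a in NOUNS and b == "IS" and (d in PROPERTIES or d in NOUNS):
                    rules.add((a, d))
            # vertical triple, read top to bottom
            if r + 2 < rows:
                a, b, d = grid[r][c], grid[r + 1][c], grid[r + 2][c]
                if a in NOUNS and b == "IS" and (d in PROPERTIES or d in NOUNS):
                    rules.add((a, d))
    return rules

grid = [
    ["BABA", "IS", "YOU", ""],
    ["",     "",   "",    ""],
    ["FLAG", "IS", "WIN", ""],
]
print(sorted(active_rules(grid)))  # [('BABA', 'YOU'), ('FLAG', 'WIN')]
```

Shifting the "WIN" tile out of its row would delete the FLAG-IS-WIN rule entirely, which is exactly the kind of dynamic rule reconfiguration the benchmark tests.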
Problem

Research questions and friction points this paper is trying to address.

Testing AI models' ability to manipulate game rules creatively
Evaluating generalization through rule recombination in Baba Is You
Assessing multi-modal LLMs' failure in dynamic rule-based environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed a benchmark based on the puzzle game Baba Is You, where rules are movable word tiles
Evaluated three multi-modal LLMs: GPT-4o, Gemini-1.5-Pro, and Gemini-1.5-Flash
Tested generalization that requires manipulating and recombining rules, not just objects