ZEBRAARENA: A Diagnostic Simulation Environment for Studying Reasoning-Action Coupling in Tool-Augmented LLMs

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks struggle to effectively evaluate the synergy between multi-step reasoning and external actions in tool-augmented large language models, often confounded by environmental complexity, memorized knowledge, or data contamination. This work proposes the first diagnostic environment specifically designed to disentangle reasoning from action, leveraging procedurally generated tasks, minimal-knowledge design, and controllable difficulty to compel models to rely on precise tool invocation and deductive reasoning. The environment enforces a unique solution, defines a theoretically optimal query count, and features an interpretable tool-reasoning interface, thereby isolating the effects of memorization and generalization. Experiments reveal that state-of-the-art models—including GPT-5 and Gemini 2.5 Pro—achieve only ~60% accuracy on challenging tasks and exceed the theoretical optimum in tool calls by 70%–270%, exposing significant deficiencies in efficient reasoning-action coordination.

📝 Abstract
Tool-augmented large language models (LLMs) must tightly couple multi-step reasoning with external actions, yet existing benchmarks often confound this interplay with complex environment dynamics, memorized knowledge, or dataset contamination. In this paper, we introduce ZebraArena, a procedurally generated diagnostic environment for studying reasoning-action coupling in tool-augmented LLMs, with controllable difficulty and a knowledge-minimal design that limits gains from memorization or dataset contamination. Each task in ZebraArena requires critical information that is available only through targeted tool use, yielding an interpretable interface between external information acquisition and deductive reasoning. This design provides deterministic evaluation via unique solutions and a theoretical optimal query count for measuring efficient tool use. We show that ZebraArena requires a combination of in-depth reasoning and accurate external tool calling, which remains a challenge: frontier reasoning models such as GPT-5 and Gemini 2.5 Pro achieve only 60% accuracy on the hard instances. We also observe a persistent gap between theoretical optimality and practical tool usage. For example, GPT-5 uses 70-270% more tool calls than the theoretical optimum. We highlight the key findings in our evaluation, and hope ZebraArena stimulates further research on the interplay between internal reasoning and external action.
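To make the design concrete, here is a minimal sketch (illustrative only, not the paper's actual implementation; all names such as `MiniZebraEnv` and `query_color` are hypothetical) of a diagnostic environment in the spirit described: a procedurally generated task with a unique hidden solution, a tool interface that is the only route to the critical information, and a theoretical optimal query count that an agent's tool usage can be measured against.

```python
import random


class MiniZebraEnv:
    """Toy diagnostic environment (illustrative sketch): a hidden bijection
    between houses and colors must be recovered, and each house's color can
    only be learned through an explicit tool call."""

    def __init__(self, n=4, seed=0):
        rng = random.Random(seed)          # procedural generation, reproducible
        self.colors = [f"color{i}" for i in range(n)]
        hidden = self.colors[:]
        rng.shuffle(hidden)
        self._hidden = hidden              # unique ground-truth solution
        self.tool_calls = 0                # counts external actions taken

    def query_color(self, house):
        """Tool interface: reveal the color of one house (one external action)."""
        self.tool_calls += 1
        return self._hidden[house]

    def optimal_calls(self):
        """Theoretical optimum: n - 1 queries suffice, since the last
        house's color follows by elimination."""
        return len(self._hidden) - 1

    def check(self, proposed):
        """Deterministic evaluation against the unique solution."""
        return proposed == self._hidden


def solve(env, n=4):
    """An optimal agent: query n - 1 houses, deduce the last by elimination."""
    known = [env.query_color(h) for h in range(n - 1)]
    remaining = [c for c in env.colors if c not in known]
    return known + remaining
```

An agent that matches the optimum here makes exactly `env.optimal_calls()` tool calls before `env.check(...)` returns `True`; the 70-270% overhead reported in the paper corresponds to agents making far more queries than strictly necessary.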
Problem

Research questions and friction points this paper is trying to address.

reasoning-action coupling
tool-augmented LLMs
diagnostic benchmark
external tool use
multi-step reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning-action coupling
tool-augmented LLMs
procedurally generated environment
diagnostic benchmark
optimal tool usage