ClawArena: Benchmarking AI Agents in Evolving Information Environments

📅 2026-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI benchmarks struggle to evaluate an agent's ability to maintain accurate beliefs in dynamic, multi-source, and contradictory information environments. To address this gap, this work proposes ClawArena, a comprehensive evaluation benchmark spanning eight professional domains and 64 scenarios that establishes the first assessment framework tailored to evolving informational contexts. The framework introduces a 14-category question taxonomy and employs two evaluation formats: multiple-choice (set-selection) and shell-based executable verification. By simulating noisy, partial, and conflicting multi-channel information streams alongside phased belief updates, ClawArena targets three core challenges: multi-source conflict reasoning, dynamic belief revision, and implicit personalization. Experiments show that both model capability (a 15.4% performance spread) and framework design (a 9.2% spread) substantially affect results, that self-evolving skill frameworks can partially close model-capability gaps, and that the difficulty of belief revision hinges on the update design strategy rather than on whether an update occurs at all.
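
To make the setup concrete, below is a minimal Python sketch of how such a scenario might be represented: a complete hidden ground truth, evidence scattered across channels, and updates released in phases. All class and field names are illustrative assumptions, not ClawArena's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical representation of an evolving-information scenario.
# Class and field names are assumptions for illustration only.

@dataclass
class Update:
    phase: int      # belief-revision phase in which this evidence appears
    channel: str    # e.g. "chat_session", "email", "workspace_file"
    content: str    # possibly noisy, partial, or contradictory evidence

@dataclass
class Scenario:
    domain: str                             # one of the 8 professional domains
    ground_truth: dict[str, str]            # complete state, hidden from the agent
    initial_evidence: list[Update] = field(default_factory=list)
    staged_updates: list[Update] = field(default_factory=list)

    def visible_evidence(self, phase: int) -> list[Update]:
        """Everything the agent has been exposed to up to a given phase."""
        return self.initial_evidence + [
            u for u in self.staged_updates if u.phase <= phase
        ]
```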
📝 Abstract
AI agents deployed as persistent assistants must maintain correct beliefs as their information environment evolves. In practice, evidence is scattered across heterogeneous sources that often contradict one another, new information can invalidate earlier conclusions, and user preferences surface through corrections rather than explicit instructions. Existing benchmarks largely assume static, single-authority settings and do not evaluate whether agents can keep up with this complexity. We introduce ClawArena, a benchmark for evaluating AI agents in evolving information environments. Each scenario maintains a complete hidden ground truth while exposing the agent only to noisy, partial, and sometimes contradictory traces across multi-channel sessions, workspace files, and staged updates. Evaluation is organized around three coupled challenges: multi-source conflict reasoning, dynamic belief revision, and implicit personalization, whose interactions yield a 14-category question taxonomy. Two question formats, multi-choice (set-selection) and shell-based executable checks, test both reasoning and workspace grounding. The current release contains 64 scenarios across 8 professional domains, totaling 1,879 evaluation rounds and 365 dynamic updates. Experiments on five agent frameworks and five language models show that both model capability (15.4% range) and framework design (9.2%) substantially affect performance, that self-evolving skill frameworks can partially close model-capability gaps, and that belief revision difficulty is determined by update design strategy rather than the mere presence of updates. Code is available at https://github.com/aiming-lab/ClawArena.
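
The abstract names two question formats: multi-choice set-selection and shell-based executable checks. As a rough illustration only (not the benchmark's actual scoring code), the sketch below shows one plausible way each format could be evaluated, assuming exact-set match for the former and exit code 0 as the pass criterion for the latter.

```python
import subprocess

# Illustrative scoring helpers; the check command, pass criterion, and
# exact-set-match rule are assumptions, not ClawArena's published protocol.

def score_set_selection(predicted: set[str], gold: set[str]) -> float:
    """Score a multi-choice (set-selection) question: 1.0 only if the
    selected option set exactly matches the gold set."""
    return 1.0 if predicted == gold else 0.0

def run_executable_check(check_cmd: str, workspace: str) -> bool:
    """Run a shell-based check inside the agent's workspace directory and
    treat exit code 0 as a pass."""
    result = subprocess.run(
        check_cmd, shell=True, cwd=workspace,
        capture_output=True, text=True, timeout=60,
    )
    return result.returncode == 0

# Example usage (with made-up inputs):
#   score_set_selection({"A", "C"}, {"A", "C"})            -> 1.0
#   run_executable_check("test -f report/summary.md", ".")
```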
Problem

Research questions and friction points this paper is trying to address.

evolving information environments
belief revision
multi-source conflict
implicit personalization
AI agent benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

evolving information environments
dynamic belief revision
multi-source conflict reasoning
implicit personalization
agent benchmarking