CATArena: Evaluation of LLM Agents through Iterative Tournament Competitions

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-agent evaluation benchmarks rely on static end-to-end testing, failing to capture learning capabilities—such as self-improvement and peer learning—while suffering from score saturation and heavy dependence on manual annotation. To address these limitations, we propose the first adversarial evaluation framework centered explicitly on learning capacity. Our method establishes a dynamic game-theoretic environment via iterative tournaments in open-rule board and card games, where agents evolve strategies through multi-round competition and real-time feedback. The framework enables automated, unbounded-score assessment and integrates customizable strategy encoding with production-grade code-based agents. Experiments demonstrate significantly enhanced evaluation sensitivity: the framework robustly differentiates agents of varying scales across strategic reasoning, adaptability, and long-term evolutionary capability, effectively overcoming fundamental constraints of conventional benchmarks.

📝 Abstract
Large Language Model (LLM) agents have evolved from basic text generation to autonomously completing complex tasks through interaction with external tools. However, current benchmarks mainly assess end-to-end performance in fixed scenarios, restricting evaluation to specific skills and suffering from score saturation and growing dependence on expert annotation as agent capabilities improve. In this work, we emphasize the importance of learning ability, including both self-improvement and peer-learning, as a core driver for agent evolution toward human-level intelligence. We propose an iterative, competitive peer-learning framework, which allows agents to refine and optimize their strategies through repeated interactions and feedback, thereby systematically evaluating their learning capabilities. To address the score saturation issue in current benchmarks, we introduce CATArena, a tournament-style evaluation platform featuring four diverse board and card games with open-ended scoring. By providing tasks without explicit upper score limits, CATArena enables continuous and dynamic evaluation of rapidly advancing agent capabilities. Experimental results and analyses involving both minimal and commercial code agents demonstrate that CATArena provides reliable, stable, and scalable benchmarking for core agent abilities, particularly learning ability and strategy coding.
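The iterative peer-learning tournament described above can be sketched as a simple loop: agents play round-robin matches, accumulate open-ended scores, and revise their strategies from match feedback between rounds. The sketch below is purely illustrative; the `Agent` interface, `play`, and `revise` are hypothetical stand-ins, not CATArena's actual API, and a scalar `skill` value substitutes for a coded game strategy.

```python
import itertools
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    skill: float                           # stand-in for a coded strategy's strength
    history: list = field(default_factory=list)

    def play(self, opponent: "Agent") -> float:
        # Toy match: stronger agents tend to score more; scores are non-negative
        # and unbounded above, mimicking open-ended scoring.
        return max(0.0, random.gauss(self.skill - opponent.skill, 1.0))

    def revise(self) -> None:
        # Peer-learning stand-in: nudge the strategy using recent match feedback.
        if self.history:
            self.skill += 0.1 * (sum(self.history) / len(self.history))
        self.history.clear()

def tournament(agents: list[Agent], rounds: int = 3, seed: int = 0) -> dict[str, float]:
    random.seed(seed)
    totals = {a.name: 0.0 for a in agents}
    for _ in range(rounds):
        # Every ordered pairing plays once per round (round-robin).
        for a, b in itertools.permutations(agents, 2):
            score = a.play(b)
            totals[a.name] += score        # cumulative score has no upper limit
            a.history.append(score)
        for a in agents:                   # agents refine strategies between rounds
            a.revise()
    return totals

totals = tournament([Agent("A", 1.0), Agent("B", 0.5)])
```

Because cumulative totals grow with each round rather than approaching a ceiling, this kind of setup avoids the score-saturation problem the paper attributes to fixed-scenario benchmarks.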
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM agents' learning ability through competitive interactions
Addressing score saturation in fixed-scenario benchmarks
Developing dynamic evaluation for self-improving agent strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative competitive peer-learning framework for agents
Tournament-style platform with open-ended scoring games
Continuous dynamic evaluation without upper score limits