General Agent Evaluation

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic, domain-agnostic methodologies for evaluating general-purpose agents. It proposes the first unified evaluation framework for such agents, built on conceptual principles, a Unified Protocol for agent-benchmark integration, and Exgentic, a practical framework that decouples agents from their environments. The framework enables fair, plug-and-play assessment of five representative agents across six heterogeneous environments without any domain-specific customization. Establishing general agent evaluation as a standalone research objective, the study also introduces an open leaderboard that overcomes a key limitation of existing benchmarks, which typically rely on environment-specific tuning. Experiments show that unmodified general agents generalize across diverse environments, achieving performance comparable to domain-specific agents. The evaluation protocol, framework, and leaderboard are all publicly released.
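
The summary describes decoupling agents from environments behind a shared interface, so that any agent can be dropped into any environment without glue code. The sketch below shows what such a plug-and-play contract could look like; all names (`Environment`, `Agent`, `run_episode`) and the text-in/text-out signatures are illustrative assumptions, not Exgentic's actual API:

```python
from abc import ABC, abstractmethod

class Environment(ABC):
    """A benchmark environment exposed behind a uniform interface."""

    @abstractmethod
    def reset(self) -> str:
        """Start a fresh episode and return its task description."""

    @abstractmethod
    def step(self, action: str) -> tuple[str, bool]:
        """Apply an agent action; return (observation, done)."""

    @abstractmethod
    def score(self) -> float:
        """Return the final task score in [0, 1]."""

class Agent(ABC):
    """A general-purpose agent that sees only the uniform interface."""

    @abstractmethod
    def act(self, observation: str) -> str:
        """Produce the next action from the latest observation."""

def run_episode(agent: Agent, env: Environment, max_steps: int = 50) -> float:
    """Run one episode with no environment-specific glue code."""
    observation = env.reset()
    for _ in range(max_steps):
        observation, done = env.step(agent.act(observation))
        if done:
            break
    return env.score()
```

The point of such a contract is that the agent implementation never imports anything environment-specific, which is what makes "without any domain-specific customization" testable.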

📝 Abstract
The promise of general-purpose agents - systems that perform tasks in unfamiliar environments without domain-specific engineering - remains largely unrealized. Existing agents are predominantly specialized, and while emerging implementations like OpenAI SDK Agent and Claude Code hint at broader capabilities, no systematic evaluation of their general performance has been pursued. Current agentic benchmarks assume domain-specific integration, encoding task information in ways that preclude fair evaluation of general agents. This paper frames general-agent evaluation as a first-class research objective. We propose conceptual principles for such evaluation, a Unified Protocol enabling agent-benchmark integration, and Exgentic - a practical framework for general agent evaluation. We benchmark five prominent agent implementations across six environments as the first Open General Agent Leaderboard. Our experiments show that general agents generalize across diverse environments, achieving performance comparable to domain-specific agents without any environment-specific tuning. We release our evaluation protocol, framework, and leaderboard to establish a foundation for systematic research on general-purpose agents.
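
As a rough illustration of the leaderboard setup the abstract describes (five agents across six environments), the harness below reuses the hypothetical `Agent`/`Environment` interfaces from the earlier sketch; the mean-score aggregation is an assumption for illustration, not the paper's actual metric:

```python
def build_leaderboard(agents: dict[str, Agent],
                      envs: dict[str, Environment],
                      episodes: int = 3) -> list[tuple[str, float]]:
    """Run every agent, unmodified, on every environment and rank by mean score."""
    results: dict[str, float] = {}
    for name, agent in agents.items():
        scores = [
            run_episode(agent, env)
            for env in envs.values()
            for _ in range(episodes)
        ]
        results[name] = sum(scores) / len(scores)  # unweighted mean across environments
    # Highest mean score first: a simple leaderboard ordering.
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)
```
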
Problem

Research questions and friction points this paper is trying to address.

general agent
agent evaluation
benchmarking
general-purpose agents
systematic evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

General Agent Evaluation
Unified Protocol
Exgentic
Open General Agent Leaderboard
Domain-Generalization