🤖 AI Summary
Existing evaluation frameworks frequently conflate LLM-based chatbots with AI agents, leading to inappropriate benchmark selection. This paper addresses that conceptual ambiguity by distinguishing the two through an evolutionary lens, emphasizing fundamental differences in goal-directedness, environmental interaction, and capability emergence. Methodologically, the authors propose a five-dimensional analytical framework (spanning complex environment, multi-source instructor, dynamic feedback, multi-modal perception, and advanced capability) and introduce a dual-axis taxonomy, "environment-driven" versus "capability-emergent", to systematically map and classify 42 mainstream benchmarks. They further formulate a forward-looking, four-dimensional evaluation paradigm covering environment, agent, evaluator, and metrics. Drawing on systematic literature review, conceptual modeling, and taxonomic analysis, the work delivers a structured benchmark reference table and a practical implementation guide that explicitly delineates the applicability boundaries of each benchmark, advancing the scientific rigor and standardization of AI agent evaluation.
📝 Abstract
The advent of large language models (LLMs) such as GPT, Gemini, and DeepSeek has significantly advanced natural language processing, giving rise to sophisticated chatbots capable of diverse language-related tasks. The transition from these traditional LLM chatbots to more advanced AI agents represents a pivotal evolutionary step. However, existing evaluation frameworks often blur the distinction between LLM chatbots and AI agents, leading to confusion among researchers when selecting appropriate benchmarks. To bridge this gap, this paper presents a systematic analysis of current evaluation approaches, grounded in an evolutionary perspective. We provide a detailed analytical framework that clearly differentiates AI agents from LLM chatbots along five key aspects: complex environment, multi-source instructor, dynamic feedback, multi-modal perception, and advanced capability. Further, we categorize existing evaluation benchmarks along two axes: the external environmental forces that drive agent development and the advanced internal capabilities that emerge as a result. For each category, we delineate the relevant evaluation attributes and present them comprehensively in practical reference tables. Finally, we synthesize current trends and outline future evaluation methodologies through four critical lenses: environment, agent, evaluator, and metrics. Our findings offer actionable guidance for researchers, facilitating the informed selection and application of benchmarks in AI agent evaluation and fostering continued advancement in this rapidly evolving research domain.