🤖 AI Summary
Existing large language model (LLM) evaluation benchmarks suffer from closed-ended design, a high risk of data contamination, prohibitive maintenance costs, and inflexibility in tracking evolving model capabilities. To address these limitations, the authors propose MACEval, a Multi-Agent Continual Evaluation network: an open, automated, and scalable evaluation paradigm powered by multi-agent collaboration. MACEval employs role assignment, in-process data generation, and cascaded evaluation routing to measure performance longitudinally and track capabilities continually. It can also migrate or integrate existing benchmarks via customized evaluation topologies. Experiments on 9 open-ended tasks with 23 participating LLMs demonstrate that MACEval obtains results comparable to related benchmarks while substantially reducing data and computational overhead, entirely without human intervention, and that its flexible, scalable design supports sustainable LLM assessment.
📝 Abstract
Hundreds of benchmarks dedicated to evaluating large models from multiple perspectives have been presented over the past few years. Despite these substantial efforts, most benchmarks remain closed-ended and are prone to overfitting due to potential data contamination in the ever-growing training corpora of large models, thereby undermining the credibility of the evaluation. Moreover, the increasing scale and scope of current benchmarks with transient metrics, as well as the heavily human-dependent curation procedure, pose significant challenges for timely maintenance and for adaptation to gauge the advancing capabilities of large models. In this paper, we introduce MACEval, a Multi-Agent Continual Evaluation network for dynamic evaluation of large models, and define a new set of metrics to quantify performance longitudinally and sustainably. MACEval adopts an interactive and autonomous evaluation mode that employs role assignment, in-process data generation, and evaluation routing through a cascaded agent network. Extensive experiments on 9 open-ended tasks with 23 participating large models demonstrate that MACEval is (1) human-free and automatic, mitigating laborious result processing through guided inter-agent judgment; (2) efficient and economical, requiring considerably less data and overhead to obtain results similar to those of related benchmarks; and (3) flexible and scalable, migrating or integrating existing benchmarks via customized evaluation topologies. We hope that MACEval can broaden future directions of large model evaluation.
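Conceptually, the role assignment, in-process data generation, and cascaded evaluation routing described above can be sketched as a small pipeline. Everything below is an illustrative assumption, not MACEval's actual API: the agent roles, the toy stand-in behaviors, and the trivial judging rule are placeholders for LLM calls.

```python
# Hypothetical sketch of a cascaded multi-agent evaluation round.
# Roles, names, and the toy judging rule are illustrative assumptions,
# not the paper's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    role: str                     # e.g. "generator", "examinee", "judge"
    act: Callable[[str], str]     # role-specific behavior (stand-in for an LLM call)

def run_cascade(agents: list[Agent], task: str) -> dict:
    """Route one task through the agent cascade: a generator agent produces
    a test item in-process, examinee agents answer it, and a judge agent
    scores each answer, so no human curation or grading is involved."""
    generator = next(a for a in agents if a.role == "generator")
    examinees = [a for a in agents if a.role == "examinee"]
    judge = next(a for a in agents if a.role == "judge")

    item = generator.act(task)                              # in-process data generation
    answers = {a.name: a.act(item) for a in examinees}      # parallel examinee responses
    scores = {name: judge.act(ans) for name, ans in answers.items()}  # inter-agent judgment
    return {"item": item, "answers": answers, "scores": scores}

# Toy stand-ins for model calls:
agents = [
    Agent("gen", "generator", lambda t: f"Q: {t}?"),
    Agent("model_a", "examinee", lambda q: q.upper()),
    Agent("model_b", "examinee", lambda q: q.lower()),
    Agent("judge", "judge", lambda ans: "pass" if ans.isupper() else "fail"),
]
result = run_cascade(agents, "summarize")
```

In a real deployment each `act` would wrap an LLM invocation, and the routing graph could be customized (e.g. multiple judges, or chained generators) to integrate existing benchmarks into the evaluation topology.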