WiS Platform: Enhancing Evaluation of LLM-Based Multi-Agent Systems Through Game-Based Analysis

📅 2024-12-04
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-driven multi-agent system (MAS) evaluations suffer from poor reproducibility, coarse-grained analysis, and difficulties in cross-model comparison. To address these challenges, this work introduces an open-source evaluation platform grounded in the "Who is Spy?" game, pioneering a gamified, dynamic evaluation paradigm. The platform enables unified integration of both open- and closed-weight LLMs and supports fine-grained, strategy-level quantification, specifically across attack, defense, and reasoning behaviors. It incorporates Hugging Face model interfacing, a real-time web leaderboard, a configurable rule engine, interactive log analytics, and win-rate attribution modeling. Empirical validation across 20+ state-of-the-art LLMs demonstrates the platform's efficacy in uncovering significant behavioral disparities in collaboration, deception, and logical reasoning. Consequently, it substantially enhances the interpretability, reproducibility, and cross-model comparability of MAS evaluations.

📝 Abstract
Recent advancements in autonomous multi-agent systems (MAS) based on large language models (LLMs) have expanded application scenarios and improved the capability of LLMs to handle complex tasks. Despite demonstrated effectiveness, existing studies still struggle with the evaluation, analysis, and reproducibility of LLM-based MAS. In this paper, to facilitate research on LLM-based MAS, we introduce an open, scalable, and real-time updated platform for accessing and analyzing LLM-based MAS, built on the game "Who is Spy?" (WiS). Our platform features three main strengths: (1) a unified model evaluation interface that supports models available on Hugging Face; (2) a real-time updated leaderboard for model evaluation; (3) comprehensive evaluation covering game-winning rates, attack and defense strategies, and the reasoning of LLMs. To rigorously test WiS, we conduct extensive experiments covering various open- and closed-source LLMs, and we find that different agents exhibit distinct and intriguing behaviors in the game. The experimental results demonstrate the effectiveness and efficiency of our platform in evaluating LLM-based MAS. Our platform and its documentation are publicly available at https://whoisspy.ai/
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM-based multi-agent systems effectively
Analyzing game strategies and reasoning in LLMs
Providing a scalable platform for MAS reproducibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Game-based platform for LLM multi-agent evaluation
Real-time leaderboard and unified model interface
Comprehensive metrics for agent behavior analysis
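To make the leaderboard idea above concrete, here is a minimal sketch of how per-model win rates could be aggregated from game logs. The record schema, field names, and the `win_rates` function are illustrative assumptions, not the platform's actual API.

```python
# Hypothetical sketch: aggregating per-model win rates from "Who is Spy?"
# game logs, in the spirit of the platform's real-time leaderboard.
from collections import defaultdict

def win_rates(game_logs):
    """Return {model: wins / games_played} from a list of game records.

    Each record is assumed (for illustration) to look like:
        {"players": {"model-a": "spy", "model-b": "civilian", ...},
         "winner_role": "spy"}  # the role that won this game
    """
    wins = defaultdict(int)
    games = defaultdict(int)
    for log in game_logs:
        for model, role in log["players"].items():
            games[model] += 1
            # A model "wins" when it played the role that won the game.
            if role == log["winner_role"]:
                wins[model] += 1
    return {m: wins[m] / games[m] for m in games}

logs = [
    {"players": {"model-a": "spy", "model-b": "civilian"},
     "winner_role": "spy"},
    {"players": {"model-a": "civilian", "model-b": "civilian"},
     "winner_role": "civilian"},
]
print(win_rates(logs))  # model-a won both of its games; model-b won one of two
```

The paper's actual evaluation goes further (attack/defense strategy and reasoning metrics, win-rate attribution); this sketch only shows the basic win-rate aggregation step.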
Chengwei Hu
Fudan University
Jianhui Zheng
Taobao & Tmall Group of Alibaba
Yancheng He
Alibaba Group
Hangyu Guo
Taobao & Tmall Group of Alibaba
Junguang Jiang
Taobao & Tmall Group of Alibaba
Han Zhu
Taobao & Tmall Group of Alibaba
Kai Sun
Taobao & Tmall Group of Alibaba
Yuning Jiang
Taobao & Tmall Group of Alibaba
Wenbo Su
Taobao & Tmall Group of Alibaba
Bo Zheng
Taobao & Tmall Group of Alibaba